Test Report: KVM_Linux_crio 17671

199a0e3eaea8884b6f30e504f56bf5d155934cac:2023-11-28:32061

Tests failed (27/304)

Order  Failed test  Duration (s)
35 TestAddons/parallel/Ingress 156.85
48 TestAddons/StoppedEnableDisable 155.24
164 TestIngressAddonLegacy/serial/ValidateIngressAddons 169.16
212 TestMultiNode/serial/PingHostFrom2Pods 3.37
218 TestMultiNode/serial/RestartKeepsNodes 701.08
220 TestMultiNode/serial/StopMultiNode 143.84
227 TestPreload 279.85
233 TestRunningBinaryUpgrade 193.74
259 TestStoppedBinaryUpgrade/Upgrade 290.19
332 TestStartStop/group/old-k8s-version/serial/Stop 140.41
335 TestStartStop/group/newest-cni/serial/Stop 139.54
339 TestStartStop/group/no-preload/serial/Stop 139.52
341 TestStartStop/group/default-k8s-diff-port/serial/Stop 140.2
342 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.42
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 12.42
346 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
358 TestStartStop/group/embed-certs/serial/Stop 139.41
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.52
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.54
363 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.62
364 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 378.57
365 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 342.19
366 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.46
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 169
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 329.41
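
The first failure, detailed below, is the Ingress check timing out: the curl run inside the guest exits with status 28 (curl's timed-out code), surfaced through minikube ssh. As a minimal sketch for reproducing it, assuming a minikube source checkout (integration tests under test/integration) and reusing the profile name and binary path from the log; the test harness may need extra flags (driver, binary under test) beyond what is shown:

    # Re-run only the failing subtest (standard go test -run regex).
    go test ./test/integration -run 'TestAddons/parallel/Ingress' -v -timeout 30m

    # Repeat the check the test performs, exactly as logged:
    out/minikube-linux-amd64 -p addons-681229 ssh \
      "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
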
TestAddons/parallel/Ingress (156.85s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:206: (dbg) Run:  kubectl --context addons-681229 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:231: (dbg) Run:  kubectl --context addons-681229 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context addons-681229 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3fc8c277-fb76-4adf-9332-9e20e1d69cb5] Pending
helpers_test.go:344: "nginx" [3fc8c277-fb76-4adf-9332-9e20e1d69cb5] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3fc8c277-fb76-4adf-9332-9e20e1d69cb5] Running
addons_test.go:249: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.018220724s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-681229 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.913352959s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context addons-681229 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.100
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p addons-681229 addons disable ingress-dns --alsologtostderr -v=1: (1.491803319s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p addons-681229 addons disable ingress --alsologtostderr -v=1: (7.825823653s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-681229 -n addons-681229
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-681229 logs -n 25: (1.385665166s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |                     |
	|         | -p download-only-780173                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC | 28 Nov 23 02:41 UTC |
	| delete  | -p download-only-780173                                                                     | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC | 28 Nov 23 02:41 UTC |
	| delete  | -p download-only-780173                                                                     | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC | 28 Nov 23 02:41 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-179554 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |                     |
	|         | binary-mirror-179554                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:40849                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-179554                                                                     | binary-mirror-179554 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC | 28 Nov 23 02:41 UTC |
	| addons  | disable dashboard -p                                                                        | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |                     |
	|         | addons-681229                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |                     |
	|         | addons-681229                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-681229 --wait=true                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC | 28 Nov 23 02:43 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-681229 addons                                                                        | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:43 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:44 UTC |
	|         | addons-681229                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-681229 ssh cat                                                                       | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:43 UTC |
	|         | /opt/local-path-provisioner/pvc-b94d0112-df69-4553-9f14-bebd2794b54c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-681229 addons disable                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:44 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-681229 ip                                                                            | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:43 UTC |
	| addons  | addons-681229 addons disable                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:43 UTC | 28 Nov 23 02:43 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-681229 ssh curl -s                                                                   | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:44 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-681229 addons disable                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:44 UTC | 28 Nov 23 02:44 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:44 UTC | 28 Nov 23 02:44 UTC |
	|         | -p addons-681229                                                                            |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:44 UTC | 28 Nov 23 02:44 UTC |
	|         | addons-681229                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:44 UTC | 28 Nov 23 02:44 UTC |
	|         | -p addons-681229                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681229 addons                                                                        | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:45 UTC | 28 Nov 23 02:45 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-681229 addons                                                                        | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:45 UTC | 28 Nov 23 02:45 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-681229 ip                                                                            | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:46 UTC | 28 Nov 23 02:46 UTC |
	| addons  | addons-681229 addons disable                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:46 UTC | 28 Nov 23 02:46 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-681229 addons disable                                                                | addons-681229        | jenkins | v1.32.0 | 28 Nov 23 02:46 UTC | 28 Nov 23 02:46 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 02:41:14
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 02:41:14.070710  340927 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:41:14.070899  340927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:14.070910  340927 out.go:309] Setting ErrFile to fd 2...
	I1128 02:41:14.070914  340927 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:14.071127  340927 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 02:41:14.071741  340927 out.go:303] Setting JSON to false
	I1128 02:41:14.073274  340927 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5024,"bootTime":1701134250,"procs":943,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:41:14.073347  340927 start.go:138] virtualization: kvm guest
	I1128 02:41:14.075592  340927 out.go:177] * [addons-681229] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:41:14.077210  340927 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 02:41:14.078615  340927 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:41:14.077278  340927 notify.go:220] Checking for updates...
	I1128 02:41:14.080135  340927 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:41:14.081569  340927 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:41:14.082939  340927 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 02:41:14.084364  340927 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 02:41:14.086005  340927 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 02:41:14.119445  340927 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 02:41:14.121036  340927 start.go:298] selected driver: kvm2
	I1128 02:41:14.121055  340927 start.go:902] validating driver "kvm2" against <nil>
	I1128 02:41:14.121067  340927 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 02:41:14.121745  340927 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:14.121847  340927 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 02:41:14.137193  340927 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 02:41:14.137305  340927 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 02:41:14.137538  340927 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 02:41:14.137615  340927 cni.go:84] Creating CNI manager for ""
	I1128 02:41:14.137639  340927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:41:14.137660  340927 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1128 02:41:14.137673  340927 start_flags.go:323] config:
	{Name:addons-681229 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-681229 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:14.137871  340927 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:14.139798  340927 out.go:177] * Starting control plane node addons-681229 in cluster addons-681229
	I1128 02:41:14.141156  340927 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 02:41:14.141201  340927 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 02:41:14.141212  340927 cache.go:56] Caching tarball of preloaded images
	I1128 02:41:14.141310  340927 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 02:41:14.141323  340927 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 02:41:14.141683  340927 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/config.json ...
	I1128 02:41:14.141717  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/config.json: {Name:mkeeac4345b4261462cf4b89d010fccfb41b1b44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:14.141872  340927 start.go:365] acquiring machines lock for addons-681229: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 02:41:14.141934  340927 start.go:369] acquired machines lock for "addons-681229" in 46.043µs
	I1128 02:41:14.141967  340927 start.go:93] Provisioning new machine with config: &{Name:addons-681229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-681229 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 02:41:14.142069  340927 start.go:125] createHost starting for "" (driver="kvm2")
	I1128 02:41:14.145109  340927 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1128 02:41:14.145297  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:41:14.145347  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:41:14.159829  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46619
	I1128 02:41:14.160343  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:41:14.160923  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:41:14.160947  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:41:14.161330  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:41:14.161521  340927 main.go:141] libmachine: (addons-681229) Calling .GetMachineName
	I1128 02:41:14.161677  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:14.161815  340927 start.go:159] libmachine.API.Create for "addons-681229" (driver="kvm2")
	I1128 02:41:14.161843  340927 client.go:168] LocalClient.Create starting
	I1128 02:41:14.161876  340927 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem
	I1128 02:41:14.240921  340927 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem
	I1128 02:41:14.380794  340927 main.go:141] libmachine: Running pre-create checks...
	I1128 02:41:14.380822  340927 main.go:141] libmachine: (addons-681229) Calling .PreCreateCheck
	I1128 02:41:14.381412  340927 main.go:141] libmachine: (addons-681229) Calling .GetConfigRaw
	I1128 02:41:14.381868  340927 main.go:141] libmachine: Creating machine...
	I1128 02:41:14.381884  340927 main.go:141] libmachine: (addons-681229) Calling .Create
	I1128 02:41:14.382073  340927 main.go:141] libmachine: (addons-681229) Creating KVM machine...
	I1128 02:41:14.383321  340927 main.go:141] libmachine: (addons-681229) DBG | found existing default KVM network
	I1128 02:41:14.384112  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:14.383939  340949 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a50}
	I1128 02:41:14.389719  340927 main.go:141] libmachine: (addons-681229) DBG | trying to create private KVM network mk-addons-681229 192.168.39.0/24...
	I1128 02:41:14.458952  340927 main.go:141] libmachine: (addons-681229) DBG | private KVM network mk-addons-681229 192.168.39.0/24 created
	I1128 02:41:14.458991  340927 main.go:141] libmachine: (addons-681229) Setting up store path in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229 ...
	I1128 02:41:14.459006  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:14.458880  340949 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:41:14.459020  340927 main.go:141] libmachine: (addons-681229) Building disk image from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1128 02:41:14.459156  340927 main.go:141] libmachine: (addons-681229) Downloading /home/jenkins/minikube-integration/17671-333305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1128 02:41:14.693237  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:14.693084  340949 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa...
	I1128 02:41:14.837074  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:14.836897  340949 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/addons-681229.rawdisk...
	I1128 02:41:14.837112  340927 main.go:141] libmachine: (addons-681229) DBG | Writing magic tar header
	I1128 02:41:14.837123  340927 main.go:141] libmachine: (addons-681229) DBG | Writing SSH key tar header
	I1128 02:41:14.837132  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:14.837022  340949 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229 ...
	I1128 02:41:14.837143  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229
	I1128 02:41:14.837180  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229 (perms=drwx------)
	I1128 02:41:14.837202  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines (perms=drwxr-xr-x)
	I1128 02:41:14.837210  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines
	I1128 02:41:14.837226  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:41:14.837236  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305
	I1128 02:41:14.837247  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 02:41:14.837256  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home/jenkins
	I1128 02:41:14.837263  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube (perms=drwxr-xr-x)
	I1128 02:41:14.837272  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305 (perms=drwxrwxr-x)
	I1128 02:41:14.837289  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 02:41:14.837298  340927 main.go:141] libmachine: (addons-681229) DBG | Checking permissions on dir: /home
	I1128 02:41:14.837309  340927 main.go:141] libmachine: (addons-681229) DBG | Skipping /home - not owner
	I1128 02:41:14.837323  340927 main.go:141] libmachine: (addons-681229) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 02:41:14.837334  340927 main.go:141] libmachine: (addons-681229) Creating domain...
	I1128 02:41:14.838404  340927 main.go:141] libmachine: (addons-681229) define libvirt domain using xml: 
	I1128 02:41:14.838433  340927 main.go:141] libmachine: (addons-681229) <domain type='kvm'>
	I1128 02:41:14.838446  340927 main.go:141] libmachine: (addons-681229)   <name>addons-681229</name>
	I1128 02:41:14.838453  340927 main.go:141] libmachine: (addons-681229)   <memory unit='MiB'>4000</memory>
	I1128 02:41:14.838490  340927 main.go:141] libmachine: (addons-681229)   <vcpu>2</vcpu>
	I1128 02:41:14.838494  340927 main.go:141] libmachine: (addons-681229)   <features>
	I1128 02:41:14.838503  340927 main.go:141] libmachine: (addons-681229)     <acpi/>
	I1128 02:41:14.838508  340927 main.go:141] libmachine: (addons-681229)     <apic/>
	I1128 02:41:14.838548  340927 main.go:141] libmachine: (addons-681229)     <pae/>
	I1128 02:41:14.838580  340927 main.go:141] libmachine: (addons-681229)     
	I1128 02:41:14.838597  340927 main.go:141] libmachine: (addons-681229)   </features>
	I1128 02:41:14.838613  340927 main.go:141] libmachine: (addons-681229)   <cpu mode='host-passthrough'>
	I1128 02:41:14.838626  340927 main.go:141] libmachine: (addons-681229)   
	I1128 02:41:14.838650  340927 main.go:141] libmachine: (addons-681229)   </cpu>
	I1128 02:41:14.838665  340927 main.go:141] libmachine: (addons-681229)   <os>
	I1128 02:41:14.838686  340927 main.go:141] libmachine: (addons-681229)     <type>hvm</type>
	I1128 02:41:14.838699  340927 main.go:141] libmachine: (addons-681229)     <boot dev='cdrom'/>
	I1128 02:41:14.838714  340927 main.go:141] libmachine: (addons-681229)     <boot dev='hd'/>
	I1128 02:41:14.838728  340927 main.go:141] libmachine: (addons-681229)     <bootmenu enable='no'/>
	I1128 02:41:14.838755  340927 main.go:141] libmachine: (addons-681229)   </os>
	I1128 02:41:14.838769  340927 main.go:141] libmachine: (addons-681229)   <devices>
	I1128 02:41:14.838782  340927 main.go:141] libmachine: (addons-681229)     <disk type='file' device='cdrom'>
	I1128 02:41:14.838805  340927 main.go:141] libmachine: (addons-681229)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/boot2docker.iso'/>
	I1128 02:41:14.838817  340927 main.go:141] libmachine: (addons-681229)       <target dev='hdc' bus='scsi'/>
	I1128 02:41:14.838827  340927 main.go:141] libmachine: (addons-681229)       <readonly/>
	I1128 02:41:14.838841  340927 main.go:141] libmachine: (addons-681229)     </disk>
	I1128 02:41:14.838856  340927 main.go:141] libmachine: (addons-681229)     <disk type='file' device='disk'>
	I1128 02:41:14.838870  340927 main.go:141] libmachine: (addons-681229)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 02:41:14.838890  340927 main.go:141] libmachine: (addons-681229)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/addons-681229.rawdisk'/>
	I1128 02:41:14.838900  340927 main.go:141] libmachine: (addons-681229)       <target dev='hda' bus='virtio'/>
	I1128 02:41:14.838922  340927 main.go:141] libmachine: (addons-681229)     </disk>
	I1128 02:41:14.838935  340927 main.go:141] libmachine: (addons-681229)     <interface type='network'>
	I1128 02:41:14.838943  340927 main.go:141] libmachine: (addons-681229)       <source network='mk-addons-681229'/>
	I1128 02:41:14.838948  340927 main.go:141] libmachine: (addons-681229)       <model type='virtio'/>
	I1128 02:41:14.838957  340927 main.go:141] libmachine: (addons-681229)     </interface>
	I1128 02:41:14.838963  340927 main.go:141] libmachine: (addons-681229)     <interface type='network'>
	I1128 02:41:14.838971  340927 main.go:141] libmachine: (addons-681229)       <source network='default'/>
	I1128 02:41:14.838976  340927 main.go:141] libmachine: (addons-681229)       <model type='virtio'/>
	I1128 02:41:14.838983  340927 main.go:141] libmachine: (addons-681229)     </interface>
	I1128 02:41:14.838997  340927 main.go:141] libmachine: (addons-681229)     <serial type='pty'>
	I1128 02:41:14.839006  340927 main.go:141] libmachine: (addons-681229)       <target port='0'/>
	I1128 02:41:14.839012  340927 main.go:141] libmachine: (addons-681229)     </serial>
	I1128 02:41:14.839044  340927 main.go:141] libmachine: (addons-681229)     <console type='pty'>
	I1128 02:41:14.839065  340927 main.go:141] libmachine: (addons-681229)       <target type='serial' port='0'/>
	I1128 02:41:14.839073  340927 main.go:141] libmachine: (addons-681229)     </console>
	I1128 02:41:14.839081  340927 main.go:141] libmachine: (addons-681229)     <rng model='virtio'>
	I1128 02:41:14.839088  340927 main.go:141] libmachine: (addons-681229)       <backend model='random'>/dev/random</backend>
	I1128 02:41:14.839096  340927 main.go:141] libmachine: (addons-681229)     </rng>
	I1128 02:41:14.839101  340927 main.go:141] libmachine: (addons-681229)     
	I1128 02:41:14.839108  340927 main.go:141] libmachine: (addons-681229)     
	I1128 02:41:14.839114  340927 main.go:141] libmachine: (addons-681229)   </devices>
	I1128 02:41:14.839121  340927 main.go:141] libmachine: (addons-681229) </domain>
	I1128 02:41:14.839139  340927 main.go:141] libmachine: (addons-681229) 
	I1128 02:41:14.845109  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:a3:e9:81 in network default
	I1128 02:41:14.845685  340927 main.go:141] libmachine: (addons-681229) Ensuring networks are active...
	I1128 02:41:14.845707  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:14.846372  340927 main.go:141] libmachine: (addons-681229) Ensuring network default is active
	I1128 02:41:14.846675  340927 main.go:141] libmachine: (addons-681229) Ensuring network mk-addons-681229 is active
	I1128 02:41:14.847058  340927 main.go:141] libmachine: (addons-681229) Getting domain xml...
	I1128 02:41:14.847617  340927 main.go:141] libmachine: (addons-681229) Creating domain...
	I1128 02:41:16.295087  340927 main.go:141] libmachine: (addons-681229) Waiting to get IP...
	I1128 02:41:16.295794  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:16.296192  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:16.296312  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:16.296210  340949 retry.go:31] will retry after 253.538696ms: waiting for machine to come up
	I1128 02:41:16.551816  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:16.552269  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:16.552296  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:16.552225  340949 retry.go:31] will retry after 255.136415ms: waiting for machine to come up
	I1128 02:41:16.809284  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:16.809850  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:16.809875  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:16.809795  340949 retry.go:31] will retry after 342.769038ms: waiting for machine to come up
	I1128 02:41:17.153827  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:17.154340  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:17.154366  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:17.154276  340949 retry.go:31] will retry after 442.02175ms: waiting for machine to come up
	I1128 02:41:17.597935  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:17.598361  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:17.598408  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:17.598310  340949 retry.go:31] will retry after 733.428644ms: waiting for machine to come up
	I1128 02:41:18.333167  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:18.333729  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:18.333764  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:18.333683  340949 retry.go:31] will retry after 580.162875ms: waiting for machine to come up
	I1128 02:41:18.915364  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:18.915765  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:18.915810  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:18.915701  340949 retry.go:31] will retry after 1.020185294s: waiting for machine to come up
	I1128 02:41:19.937071  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:19.937531  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:19.937565  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:19.937451  340949 retry.go:31] will retry after 897.525964ms: waiting for machine to come up
	I1128 02:41:20.836598  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:20.837046  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:20.837079  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:20.836992  340949 retry.go:31] will retry after 1.224511258s: waiting for machine to come up
	I1128 02:41:22.063373  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:22.063898  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:22.063929  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:22.063871  340949 retry.go:31] will retry after 2.255865834s: waiting for machine to come up
	I1128 02:41:24.321957  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:24.322444  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:24.322484  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:24.322373  340949 retry.go:31] will retry after 2.760353127s: waiting for machine to come up
	I1128 02:41:27.083974  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:27.084391  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:27.084426  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:27.084303  340949 retry.go:31] will retry after 3.056474243s: waiting for machine to come up
	I1128 02:41:30.141984  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:30.142440  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:30.142469  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:30.142386  340949 retry.go:31] will retry after 3.131100071s: waiting for machine to come up
	I1128 02:41:33.277705  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:33.278042  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find current IP address of domain addons-681229 in network mk-addons-681229
	I1128 02:41:33.278068  340927 main.go:141] libmachine: (addons-681229) DBG | I1128 02:41:33.277981  340949 retry.go:31] will retry after 4.484684712s: waiting for machine to come up
	I1128 02:41:37.767377  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.767880  340927 main.go:141] libmachine: (addons-681229) Found IP for machine: 192.168.39.100
	I1128 02:41:37.767909  340927 main.go:141] libmachine: (addons-681229) Reserving static IP address...
	I1128 02:41:37.767924  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has current primary IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.768287  340927 main.go:141] libmachine: (addons-681229) DBG | unable to find host DHCP lease matching {name: "addons-681229", mac: "52:54:00:dd:03:de", ip: "192.168.39.100"} in network mk-addons-681229
	I1128 02:41:37.840831  340927 main.go:141] libmachine: (addons-681229) DBG | Getting to WaitForSSH function...
	I1128 02:41:37.840871  340927 main.go:141] libmachine: (addons-681229) Reserved static IP address: 192.168.39.100
	I1128 02:41:37.840903  340927 main.go:141] libmachine: (addons-681229) Waiting for SSH to be available...
	I1128 02:41:37.843459  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.843905  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:37.843938  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.844173  340927 main.go:141] libmachine: (addons-681229) DBG | Using SSH client type: external
	I1128 02:41:37.844208  340927 main.go:141] libmachine: (addons-681229) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa (-rw-------)
	I1128 02:41:37.844247  340927 main.go:141] libmachine: (addons-681229) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 02:41:37.844261  340927 main.go:141] libmachine: (addons-681229) DBG | About to run SSH command:
	I1128 02:41:37.844274  340927 main.go:141] libmachine: (addons-681229) DBG | exit 0
	I1128 02:41:37.936947  340927 main.go:141] libmachine: (addons-681229) DBG | SSH cmd err, output: <nil>: 
	I1128 02:41:37.937175  340927 main.go:141] libmachine: (addons-681229) KVM machine creation complete!
	I1128 02:41:37.937571  340927 main.go:141] libmachine: (addons-681229) Calling .GetConfigRaw
	I1128 02:41:37.938114  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:37.938308  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:37.938480  340927 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 02:41:37.938498  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:41:37.939704  340927 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 02:41:37.939718  340927 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 02:41:37.939724  340927 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 02:41:37.939731  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:37.942229  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.942658  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:37.942685  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:37.942813  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:37.942978  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:37.943122  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:37.943251  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:37.943406  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:37.943788  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:37.943803  340927 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 02:41:38.048158  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 02:41:38.048192  340927 main.go:141] libmachine: Detecting the provisioner...
	I1128 02:41:38.048202  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.051244  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.051601  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.051636  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.051821  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:38.052048  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.052238  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.052428  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:38.052602  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:38.052963  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:38.052976  340927 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 02:41:38.161843  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 02:41:38.161955  340927 main.go:141] libmachine: found compatible host: buildroot
	I1128 02:41:38.161966  340927 main.go:141] libmachine: Provisioning with buildroot...
	I1128 02:41:38.161975  340927 main.go:141] libmachine: (addons-681229) Calling .GetMachineName
	I1128 02:41:38.162287  340927 buildroot.go:166] provisioning hostname "addons-681229"
	I1128 02:41:38.162321  340927 main.go:141] libmachine: (addons-681229) Calling .GetMachineName
	I1128 02:41:38.162532  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.165278  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.165681  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.165717  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.165846  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:38.166084  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.166322  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.166470  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:38.166663  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:38.167057  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:38.167079  340927 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-681229 && echo "addons-681229" | sudo tee /etc/hostname
	I1128 02:41:38.290414  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-681229
	
	I1128 02:41:38.290441  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.293490  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.293816  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.293848  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.294010  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:38.294268  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.294442  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.294586  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:38.294751  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:38.295137  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:38.295155  340927 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-681229' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-681229/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-681229' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 02:41:38.409939  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 02:41:38.409972  340927 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 02:41:38.410016  340927 buildroot.go:174] setting up certificates
	I1128 02:41:38.410031  340927 provision.go:83] configureAuth start
	I1128 02:41:38.410050  340927 main.go:141] libmachine: (addons-681229) Calling .GetMachineName
	I1128 02:41:38.410393  340927 main.go:141] libmachine: (addons-681229) Calling .GetIP
	I1128 02:41:38.412792  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.413199  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.413233  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.413467  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.415583  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.415833  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.415873  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.415965  340927 provision.go:138] copyHostCerts
	I1128 02:41:38.416067  340927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 02:41:38.416225  340927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 02:41:38.416306  340927 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 02:41:38.416365  340927 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.addons-681229 san=[192.168.39.100 192.168.39.100 localhost 127.0.0.1 minikube addons-681229]
	I1128 02:41:38.591711  340927 provision.go:172] copyRemoteCerts
	I1128 02:41:38.591800  340927 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 02:41:38.591830  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.594455  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.594769  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.594794  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.595025  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:38.595261  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.595447  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:38.595600  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:41:38.677739  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 02:41:38.699805  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1128 02:41:38.722528  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 02:41:38.744958  340927 provision.go:86] duration metric: configureAuth took 334.90734ms
	I1128 02:41:38.744985  340927 buildroot.go:189] setting minikube options for container-runtime
	I1128 02:41:38.745223  340927 config.go:182] Loaded profile config "addons-681229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 02:41:38.745426  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:38.748005  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.748343  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:38.748384  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:38.748509  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:38.748727  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.748926  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:38.749080  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:38.749245  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:38.749640  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:38.749659  340927 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 02:41:39.044057  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 02:41:39.044098  340927 main.go:141] libmachine: Checking connection to Docker...
	I1128 02:41:39.044136  340927 main.go:141] libmachine: (addons-681229) Calling .GetURL
	I1128 02:41:39.045585  340927 main.go:141] libmachine: (addons-681229) DBG | Using libvirt version 6000000
	I1128 02:41:39.047543  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.047912  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.047942  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.048170  340927 main.go:141] libmachine: Docker is up and running!
	I1128 02:41:39.048201  340927 main.go:141] libmachine: Reticulating splines...
	I1128 02:41:39.048210  340927 client.go:171] LocalClient.Create took 24.886359809s
	I1128 02:41:39.048242  340927 start.go:167] duration metric: libmachine.API.Create for "addons-681229" took 24.886424408s
	I1128 02:41:39.048259  340927 start.go:300] post-start starting for "addons-681229" (driver="kvm2")
	I1128 02:41:39.048275  340927 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 02:41:39.048305  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:39.048617  340927 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 02:41:39.048652  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:39.050906  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.051232  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.051257  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.051413  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:39.051627  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:39.051793  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:39.051970  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:41:39.135058  340927 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 02:41:39.139053  340927 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 02:41:39.139081  340927 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 02:41:39.139151  340927 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 02:41:39.139174  340927 start.go:303] post-start completed in 90.906239ms
	I1128 02:41:39.139209  340927 main.go:141] libmachine: (addons-681229) Calling .GetConfigRaw
	I1128 02:41:39.139843  340927 main.go:141] libmachine: (addons-681229) Calling .GetIP
	I1128 02:41:39.142385  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.142745  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.142767  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.143092  340927 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/config.json ...
	I1128 02:41:39.143269  340927 start.go:128] duration metric: createHost completed in 25.00118652s
	I1128 02:41:39.143292  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:39.145390  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.145673  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.145707  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.145811  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:39.145975  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:39.146068  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:39.146147  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:39.146246  340927 main.go:141] libmachine: Using SSH client type: native
	I1128 02:41:39.146549  340927 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I1128 02:41:39.146560  340927 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 02:41:39.253810  340927 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701139299.234536097
	
	I1128 02:41:39.253841  340927 fix.go:206] guest clock: 1701139299.234536097
	I1128 02:41:39.253849  340927 fix.go:219] Guest: 2023-11-28 02:41:39.234536097 +0000 UTC Remote: 2023-11-28 02:41:39.143280549 +0000 UTC m=+25.123096528 (delta=91.255548ms)
	I1128 02:41:39.253875  340927 fix.go:190] guest clock delta is within tolerance: 91.255548ms
	I1128 02:41:39.253884  340927 start.go:83] releasing machines lock for "addons-681229", held for 25.11193908s
	I1128 02:41:39.253914  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:39.254166  340927 main.go:141] libmachine: (addons-681229) Calling .GetIP
	I1128 02:41:39.256877  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.257241  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.257274  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.257441  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:39.257943  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:39.258113  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:41:39.258213  340927 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 02:41:39.258248  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:39.258361  340927 ssh_runner.go:195] Run: cat /version.json
	I1128 02:41:39.258388  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:41:39.260820  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.261120  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.261196  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.261223  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.261449  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:39.261571  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:39.261601  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:39.261649  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:39.261742  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:41:39.261835  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:39.261920  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:41:39.261980  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:41:39.262029  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:41:39.262148  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:41:39.365355  340927 ssh_runner.go:195] Run: systemctl --version
	I1128 02:41:39.371149  340927 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 02:41:39.536078  340927 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 02:41:39.542044  340927 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 02:41:39.542125  340927 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 02:41:39.557951  340927 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 02:41:39.557981  340927 start.go:472] detecting cgroup driver to use...
	I1128 02:41:39.558088  340927 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 02:41:39.571966  340927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 02:41:39.586804  340927 docker.go:203] disabling cri-docker service (if available) ...
	I1128 02:41:39.586882  340927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 02:41:39.602292  340927 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 02:41:39.617542  340927 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 02:41:39.726428  340927 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 02:41:39.850676  340927 docker.go:219] disabling docker service ...
	I1128 02:41:39.850759  340927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 02:41:39.864476  340927 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 02:41:39.876767  340927 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 02:41:39.990630  340927 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 02:41:40.108249  340927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 02:41:40.121182  340927 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 02:41:40.138216  340927 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 02:41:40.138308  340927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:41:40.148294  340927 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 02:41:40.148359  340927 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:41:40.158324  340927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:41:40.168239  340927 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:41:40.178177  340927 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 02:41:40.187612  340927 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 02:41:40.196641  340927 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 02:41:40.196711  340927 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 02:41:40.209943  340927 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 02:41:40.219622  340927 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 02:41:40.332797  340927 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 02:41:40.510900  340927 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 02:41:40.511001  340927 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 02:41:40.517335  340927 start.go:540] Will wait 60s for crictl version
	I1128 02:41:40.517434  340927 ssh_runner.go:195] Run: which crictl
	I1128 02:41:40.520968  340927 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 02:41:40.565711  340927 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 02:41:40.565826  340927 ssh_runner.go:195] Run: crio --version
	I1128 02:41:40.615295  340927 ssh_runner.go:195] Run: crio --version
	I1128 02:41:40.667987  340927 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 02:41:40.669408  340927 main.go:141] libmachine: (addons-681229) Calling .GetIP
	I1128 02:41:40.671966  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:40.672321  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:41:40.672388  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:41:40.672513  340927 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 02:41:40.676561  340927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 02:41:40.688262  340927 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 02:41:40.688343  340927 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 02:41:40.721995  340927 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 02:41:40.722086  340927 ssh_runner.go:195] Run: which lz4
	I1128 02:41:40.726004  340927 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 02:41:40.730090  340927 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 02:41:40.730120  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 02:41:42.499582  340927 crio.go:444] Took 1.773608 seconds to copy over tarball
	I1128 02:41:42.499676  340927 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 02:41:45.460739  340927 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.96103186s)
	I1128 02:41:45.460774  340927 crio.go:451] Took 2.961161 seconds to extract the tarball
	I1128 02:41:45.460785  340927 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 02:41:45.503806  340927 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 02:41:45.577328  340927 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 02:41:45.577358  340927 cache_images.go:84] Images are preloaded, skipping loading
	I1128 02:41:45.577426  340927 ssh_runner.go:195] Run: crio config
	I1128 02:41:45.637241  340927 cni.go:84] Creating CNI manager for ""
	I1128 02:41:45.637267  340927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:41:45.637290  340927 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 02:41:45.637313  340927 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-681229 NodeName:addons-681229 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 02:41:45.637476  340927 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-681229"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 02:41:45.637593  340927 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-681229 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-681229 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 02:41:45.637658  340927 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 02:41:45.647472  340927 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 02:41:45.647560  340927 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 02:41:45.656822  340927 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1128 02:41:45.674784  340927 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 02:41:45.692119  340927 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1128 02:41:45.708839  340927 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I1128 02:41:45.712585  340927 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 02:41:45.724951  340927 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229 for IP: 192.168.39.100
	I1128 02:41:45.724991  340927 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:45.725164  340927 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 02:41:45.828960  340927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt ...
	I1128 02:41:45.828993  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt: {Name:mk50e92781561c5f21e2463eee3ef7559181150a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:45.829200  340927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key ...
	I1128 02:41:45.829216  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key: {Name:mk6cc4edb46bc965ee3c3ae4fef134bf8f16fdbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:45.829336  340927 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 02:41:45.959512  340927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt ...
	I1128 02:41:45.959543  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt: {Name:mk5ac166a2e48fb37c2035ecf1dde5774d34ae4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:45.959733  340927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key ...
	I1128 02:41:45.959753  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key: {Name:mk640eb624e3165ebedda5d2bff699c84134cafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:45.959936  340927 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.key
	I1128 02:41:45.959953  340927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt with IP's: []
	I1128 02:41:46.072849  340927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt ...
	I1128 02:41:46.072896  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: {Name:mka99e8d9b7888b217cc7abe90840f673b9450e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.073109  340927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.key ...
	I1128 02:41:46.073126  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.key: {Name:mkc7668caaf1a0718ff5fb752646bbb2f7fc5a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.073230  340927 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key.3c12ef50
	I1128 02:41:46.073257  340927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt.3c12ef50 with IP's: [192.168.39.100 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 02:41:46.285381  340927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt.3c12ef50 ...
	I1128 02:41:46.285413  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt.3c12ef50: {Name:mk96241b8f10980b7c85cf450be08fc98c851e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.285612  340927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key.3c12ef50 ...
	I1128 02:41:46.285633  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key.3c12ef50: {Name:mk670fd2c6d73491a00372711d7f0e16bf25fa93 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.285751  340927 certs.go:337] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt.3c12ef50 -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt
	I1128 02:41:46.285858  340927 certs.go:341] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key.3c12ef50 -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key
	I1128 02:41:46.285930  340927 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.key
	I1128 02:41:46.285952  340927 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.crt with IP's: []
	I1128 02:41:46.412461  340927 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.crt ...
	I1128 02:41:46.412502  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.crt: {Name:mkdd837759c464e421bacb35fd3d2521f61805ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.412709  340927 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.key ...
	I1128 02:41:46.412732  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.key: {Name:mkf7496187ffd565e2a4cbe33959eac14c8b1879 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:41:46.412978  340927 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 02:41:46.413027  340927 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 02:41:46.413055  340927 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 02:41:46.413079  340927 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 02:41:46.413740  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 02:41:46.437271  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 02:41:46.458923  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 02:41:46.480561  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 02:41:46.502070  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 02:41:46.523900  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 02:41:46.545408  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 02:41:46.567303  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 02:41:46.589651  340927 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 02:41:46.612845  340927 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 02:41:46.629223  340927 ssh_runner.go:195] Run: openssl version
	I1128 02:41:46.634778  340927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 02:41:46.645387  340927 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:41:46.649678  340927 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:41:46.649742  340927 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:41:46.655176  340927 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 02:41:46.665552  340927 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 02:41:46.669671  340927 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 02:41:46.669737  340927 kubeadm.go:404] StartCluster: {Name:addons-681229 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-681229 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:46.669857  340927 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 02:41:46.669919  340927 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 02:41:46.707398  340927 cri.go:89] found id: ""
	I1128 02:41:46.707475  340927 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 02:41:46.717026  340927 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 02:41:46.726416  340927 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 02:41:46.735688  340927 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 02:41:46.735754  340927 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 02:41:46.922463  340927 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 02:41:58.733526  340927 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 02:41:58.733693  340927 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 02:41:58.733887  340927 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 02:41:58.734022  340927 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 02:41:58.734159  340927 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 02:41:58.734249  340927 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 02:41:58.736008  340927 out.go:204]   - Generating certificates and keys ...
	I1128 02:41:58.736106  340927 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 02:41:58.736162  340927 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 02:41:58.736218  340927 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 02:41:58.736274  340927 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 02:41:58.736355  340927 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 02:41:58.736435  340927 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 02:41:58.736506  340927 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 02:41:58.736637  340927 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-681229 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1128 02:41:58.736705  340927 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 02:41:58.736824  340927 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-681229 localhost] and IPs [192.168.39.100 127.0.0.1 ::1]
	I1128 02:41:58.736921  340927 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 02:41:58.736996  340927 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 02:41:58.737038  340927 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 02:41:58.737097  340927 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 02:41:58.737153  340927 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 02:41:58.737229  340927 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 02:41:58.737303  340927 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 02:41:58.737372  340927 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 02:41:58.737460  340927 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 02:41:58.737552  340927 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 02:41:58.739177  340927 out.go:204]   - Booting up control plane ...
	I1128 02:41:58.739288  340927 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 02:41:58.739382  340927 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 02:41:58.739469  340927 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 02:41:58.739552  340927 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 02:41:58.739630  340927 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 02:41:58.739660  340927 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 02:41:58.739836  340927 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 02:41:58.739952  340927 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.503140 seconds
	I1128 02:41:58.740085  340927 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 02:41:58.740229  340927 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 02:41:58.740295  340927 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 02:41:58.740501  340927 kubeadm.go:322] [mark-control-plane] Marking the node addons-681229 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 02:41:58.740584  340927 kubeadm.go:322] [bootstrap-token] Using token: l8qtdf.qyppqin13rccylum
	I1128 02:41:58.742249  340927 out.go:204]   - Configuring RBAC rules ...
	I1128 02:41:58.742382  340927 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 02:41:58.742508  340927 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 02:41:58.742721  340927 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 02:41:58.742897  340927 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 02:41:58.743054  340927 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 02:41:58.743201  340927 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 02:41:58.743357  340927 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 02:41:58.743418  340927 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 02:41:58.743473  340927 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 02:41:58.743482  340927 kubeadm.go:322] 
	I1128 02:41:58.743554  340927 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 02:41:58.743563  340927 kubeadm.go:322] 
	I1128 02:41:58.743657  340927 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 02:41:58.743666  340927 kubeadm.go:322] 
	I1128 02:41:58.743695  340927 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 02:41:58.743768  340927 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 02:41:58.743840  340927 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 02:41:58.743849  340927 kubeadm.go:322] 
	I1128 02:41:58.743918  340927 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 02:41:58.743931  340927 kubeadm.go:322] 
	I1128 02:41:58.744024  340927 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 02:41:58.744035  340927 kubeadm.go:322] 
	I1128 02:41:58.744088  340927 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 02:41:58.744167  340927 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 02:41:58.744223  340927 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 02:41:58.744232  340927 kubeadm.go:322] 
	I1128 02:41:58.744318  340927 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 02:41:58.744407  340927 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 02:41:58.744431  340927 kubeadm.go:322] 
	I1128 02:41:58.744558  340927 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token l8qtdf.qyppqin13rccylum \
	I1128 02:41:58.744684  340927 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 02:41:58.744717  340927 kubeadm.go:322] 	--control-plane 
	I1128 02:41:58.744732  340927 kubeadm.go:322] 
	I1128 02:41:58.744876  340927 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 02:41:58.744910  340927 kubeadm.go:322] 
	I1128 02:41:58.745008  340927 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token l8qtdf.qyppqin13rccylum \
	I1128 02:41:58.745148  340927 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
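
	The bootstrap token in the join commands above (l8qtdf.qyppqin13rccylum) is time-limited; if it has expired by the time another node tries to join, an equivalent command can usually be regenerated on the control-plane node. A minimal sketch, assuming kubeadm is on the PATH inside the VM:

	    # print a fresh "kubeadm join ..." line with a new token and the CA cert hash
	    sudo kubeadm token create --print-join-command
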
	I1128 02:41:58.745180  340927 cni.go:84] Creating CNI manager for ""
	I1128 02:41:58.745194  340927 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:41:58.747883  340927 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 02:41:58.749459  340927 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 02:41:58.762826  340927 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
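
	The 457-byte 1-k8s.conflist written above is not reproduced in the log; if needed, it can be read back from the node. A usage sketch, assuming the addons-681229 profile and the minikube binary used in this run:

	    # dump the bridge CNI config that minikube generated on the guest
	    out/minikube-linux-amd64 -p addons-681229 ssh "cat /etc/cni/net.d/1-k8s.conflist"
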
	I1128 02:41:58.849395  340927 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 02:41:58.849529  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:41:58.849566  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=addons-681229 minikube.k8s.io/updated_at=2023_11_28T02_41_58_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
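
	The two kubectl runs above set up minikube's baseline access and identity: the clusterrolebinding grants cluster-admin to the kube-system default service account, and the label run stamps the node with minikube's version/commit labels and marks it as the primary node. Both can be checked afterwards, for example:

	    # inspect the RBAC binding and the node labels minikube applied
	    kubectl get clusterrolebinding minikube-rbac -o wide
	    kubectl get node addons-681229 --show-labels
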
	I1128 02:41:58.892376  340927 ops.go:34] apiserver oom_adj: -16
	I1128 02:41:59.086130  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:41:59.169346  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:41:59.763235  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:00.263561  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:00.762632  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:01.262723  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:01.763082  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:02.263602  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:02.762987  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:03.263455  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:03.762989  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:04.263106  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:04.763421  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:05.262698  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:05.762724  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:06.263013  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:06.762980  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:07.263026  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:07.763002  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:08.263594  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:08.763074  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:09.263406  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:09.763069  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:10.263685  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:10.763546  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:11.263303  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:11.763614  340927 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:42:11.893612  340927 kubeadm.go:1081] duration metric: took 13.044142931s to wait for elevateKubeSystemPrivileges.
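
	The repeated "kubectl get sa default" runs above are a readiness poll: the default service account only exists once the controller-manager's service-account controller has started, so minikube retries roughly twice a second until it appears. A minimal shell sketch of the same check, using the kubeconfig path from the log:

	    # retry until the default SA exists (interval mirrors the ~0.5s cadence seen above)
	    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
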
	I1128 02:42:11.893643  340927 kubeadm.go:406] StartCluster complete in 25.223914997s
	I1128 02:42:11.893668  340927 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:42:11.893800  340927 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:42:11.894481  340927 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:42:11.894712  340927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 02:42:11.894888  340927 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true]
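
	Every entry set to true in the toEnable map above is an addon that will be configured for this profile; the same toggles are available manually through the minikube CLI, for example (assuming the addons-681229 profile):

	    # enable or disable individual addons for this profile
	    minikube addons enable ingress -p addons-681229
	    minikube addons disable helm-tiller -p addons-681229
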
	I1128 02:42:11.895001  340927 addons.go:69] Setting default-storageclass=true in profile "addons-681229"
	I1128 02:42:11.895043  340927 addons.go:69] Setting ingress-dns=true in profile "addons-681229"
	I1128 02:42:11.895065  340927 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-681229"
	I1128 02:42:11.895084  340927 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-681229"
	I1128 02:42:11.895092  340927 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-681229"
	I1128 02:42:11.895009  340927 addons.go:69] Setting volumesnapshots=true in profile "addons-681229"
	I1128 02:42:11.895150  340927 addons.go:231] Setting addon volumesnapshots=true in "addons-681229"
	I1128 02:42:11.895025  340927 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-681229"
	I1128 02:42:11.895231  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895075  340927 addons.go:231] Setting addon ingress-dns=true in "addons-681229"
	I1128 02:42:11.895306  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895232  340927 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-681229"
	I1128 02:42:11.895428  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895020  340927 addons.go:69] Setting metrics-server=true in profile "addons-681229"
	I1128 02:42:11.895664  340927 addons.go:231] Setting addon metrics-server=true in "addons-681229"
	I1128 02:42:11.895671  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895671  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895687  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895672  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895027  340927 addons.go:69] Setting helm-tiller=true in profile "addons-681229"
	I1128 02:42:11.895708  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895717  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895726  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895792  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895790  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895808  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895814  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895717  340927 addons.go:231] Setting addon helm-tiller=true in "addons-681229"
	I1128 02:42:11.895018  340927 addons.go:69] Setting gcp-auth=true in profile "addons-681229"
	I1128 02:42:11.895902  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895946  340927 mustload.go:65] Loading cluster: addons-681229
	I1128 02:42:11.896046  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.896075  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.896136  340927 config.go:182] Loaded profile config "addons-681229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 02:42:11.896213  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.895032  340927 addons.go:69] Setting ingress=true in profile "addons-681229"
	I1128 02:42:11.896232  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.896247  340927 addons.go:231] Setting addon ingress=true in "addons-681229"
	I1128 02:42:11.896297  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.896499  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.896532  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895052  340927 addons.go:69] Setting storage-provisioner=true in profile "addons-681229"
	I1128 02:42:11.896610  340927 addons.go:231] Setting addon storage-provisioner=true in "addons-681229"
	I1128 02:42:11.895035  340927 addons.go:69] Setting registry=true in profile "addons-681229"
	I1128 02:42:11.896650  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.896657  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.896666  340927 addons.go:231] Setting addon registry=true in "addons-681229"
	I1128 02:42:11.896677  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895057  340927 addons.go:69] Setting inspektor-gadget=true in profile "addons-681229"
	I1128 02:42:11.896724  340927 addons.go:231] Setting addon inspektor-gadget=true in "addons-681229"
	I1128 02:42:11.896724  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895022  340927 addons.go:69] Setting cloud-spanner=true in profile "addons-681229"
	I1128 02:42:11.897240  340927 addons.go:231] Setting addon cloud-spanner=true in "addons-681229"
	I1128 02:42:11.897293  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.897675  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.897714  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.895029  340927 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-681229"
	I1128 02:42:11.901040  340927 addons.go:231] Setting addon nvidia-device-plugin=true in "addons-681229"
	I1128 02:42:11.901097  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.895000  340927 config.go:182] Loaded profile config "addons-681229": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 02:42:11.905163  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.905557  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.905645  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.915281  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43943
	I1128 02:42:11.915863  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.916261  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I1128 02:42:11.916581  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.916606  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.916685  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.917162  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.917191  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.917567  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.917834  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40275
	I1128 02:42:11.918131  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.918160  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.918327  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.918369  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.918374  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.918409  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.918589  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.918739  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.918771  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.919188  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.919217  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.919646  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.920235  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.920274  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.921252  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.921330  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I1128 02:42:11.922064  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.922111  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.922303  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.922948  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.922982  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.923506  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.923765  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.927577  340927 addons.go:231] Setting addon storage-provisioner-rancher=true in "addons-681229"
	I1128 02:42:11.927631  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.928033  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.928090  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.928284  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1128 02:42:11.934357  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.934913  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.934938  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.935273  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.935753  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.935791  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.936416  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43791
	I1128 02:42:11.937084  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.938160  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.938185  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.938933  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.939514  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.939539  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.940190  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33431
	I1128 02:42:11.940684  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.941153  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.941172  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.941559  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.941732  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.943723  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:11.946223  340927 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I1128 02:42:11.947867  340927 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 02:42:11.947896  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 02:42:11.947921  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:11.951641  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.951649  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I1128 02:42:11.952026  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:11.952051  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.952350  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.952415  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:11.952600  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:11.952791  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:11.952963  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.952961  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:11.952986  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.953360  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.953532  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.957972  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42907
	I1128 02:42:11.958433  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.958884  340927 addons.go:231] Setting addon default-storageclass=true in "addons-681229"
	I1128 02:42:11.958931  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.959039  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.959059  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.959379  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.959429  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.959500  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.959823  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.959893  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44407
	I1128 02:42:11.961607  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:11.962014  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.962052  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.962807  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41129
	I1128 02:42:11.963215  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43721
	I1128 02:42:11.963388  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.963942  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.964444  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42245
	I1128 02:42:11.964766  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.964783  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.964933  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.964945  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.965185  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.965332  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.965397  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.965930  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.965982  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.966406  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.966476  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.966996  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.967026  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.967072  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.967096  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.967487  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.968064  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.968123  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.968282  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44365
	I1128 02:42:11.968314  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.968616  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:11.970777  340927 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I1128 02:42:11.969048  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.969177  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.969827  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39601
	I1128 02:42:11.972373  340927 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1128 02:42:11.972395  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1128 02:42:11.972417  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:11.972373  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.973173  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.973191  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.973775  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.974347  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.974377  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.974601  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.974708  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46075
	I1128 02:42:11.974872  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I1128 02:42:11.975172  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.975191  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.975352  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.975546  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.976135  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.976190  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.976388  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.976547  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.976567  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.976926  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.977469  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.977515  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.977794  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.977813  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.978208  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.978264  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.978425  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.978772  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:11.978801  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.979014  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:11.979217  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:11.979407  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:11.979545  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:11.979982  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:11.982033  340927 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I1128 02:42:11.982800  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43149
	I1128 02:42:11.983860  340927 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I1128 02:42:11.983876  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I1128 02:42:11.983895  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:11.984507  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.985276  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.985297  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.985789  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.986450  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:11.986497  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:11.987679  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.988121  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:11.988171  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:11.988375  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:11.988573  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:11.988758  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:11.988927  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:11.992633  340927 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-681229" context rescaled to 1 replicas
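
	The rescale above trims CoreDNS from kubeadm's default two replicas down to one for this single-node cluster; done by hand it would be a plain deployment scale, for example:

	    # equivalent manual rescale of CoreDNS
	    kubectl -n kube-system scale deployment coredns --replicas=1
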
	I1128 02:42:11.992671  340927 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 02:42:11.994585  340927 out.go:177] * Verifying Kubernetes components...
	I1128 02:42:11.996465  340927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 02:42:11.994728  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45691
	I1128 02:42:11.994909  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
	I1128 02:42:11.997245  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43067
	I1128 02:42:11.997854  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.997892  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.997965  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:11.998608  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.998628  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.998757  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.998769  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.998787  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:11.998807  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:11.999167  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.999223  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:11.999456  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:11.999657  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.000460  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.001273  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:12.001321  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:12.001569  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42067
	I1128 02:42:12.001857  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.002032  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.004007  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1128 02:42:12.002612  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.002730  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.002939  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45901
	I1128 02:42:12.005574  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1128 02:42:12.005591  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1128 02:42:12.005592  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.005610  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.006246  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.007955  340927 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I1128 02:42:12.006572  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.009161  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.010745  340927 out.go:177]   - Using image docker.io/registry:2.8.3
	I1128 02:42:12.009854  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.012184  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.012197  340927 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I1128 02:42:12.012214  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I1128 02:42:12.009887  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.012234  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.011033  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.012254  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.012278  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.011769  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.012234  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42477
	I1128 02:42:12.014504  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.013116  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.014512  340927 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I1128 02:42:12.016108  340927 addons.go:423] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1128 02:42:12.016125  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.016132  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1128 02:42:12.016152  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.013779  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.016111  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.016235  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.014004  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38835
	I1128 02:42:12.016313  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.014921  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.015541  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.016401  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.016423  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.016449  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.016855  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.016989  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.017514  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.017755  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.018034  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42775
	I1128 02:42:12.018296  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.018310  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.018336  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.020315  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1128 02:42:12.018808  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.018963  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.019342  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.019430  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.020505  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.020531  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.019990  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.020713  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.020835  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.020865  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.020905  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.021840  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I1128 02:42:12.022018  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1128 02:42:12.024076  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1128 02:42:12.022093  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.022714  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.022930  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.023811  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.024401  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.024512  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.024605  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.025815  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1128 02:42:12.026422  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39133
	I1128 02:42:12.026434  340927 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1128 02:42:12.026494  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.026665  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.027880  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1128 02:42:12.028396  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42249
	I1128 02:42:12.028402  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38125
	I1128 02:42:12.028854  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.028909  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.029879  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.033185  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1128 02:42:12.035011  340927 out.go:177]   - Using image docker.io/busybox:stable
	I1128 02:42:12.033783  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.033890  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.033930  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.034126  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:12.034918  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1128 02:42:12.036872  340927 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 02:42:12.036963  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.036984  340927 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1128 02:42:12.037478  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.038354  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.039966  340927 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 02:42:12.039981  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 02:42:12.039999  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.038369  340927 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1128 02:42:12.038388  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1128 02:42:12.037709  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:12.038827  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.038857  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.040355  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.041461  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1128 02:42:12.041483  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1128 02:42:12.041494  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.041502  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.041469  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:12.041643  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.043344  340927 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.12
	I1128 02:42:12.041933  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:12.042939  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.043926  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.045045  340927 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I1128 02:42:12.045063  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1128 02:42:12.045081  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.046867  340927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
	I1128 02:42:12.046873  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.045456  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.046676  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.046707  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.045424  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:12.047114  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.047264  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.048063  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.048099  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.048429  340927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 02:42:12.048634  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.048636  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.048660  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.048682  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.049386  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.049969  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.049992  340927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 02:42:12.050001  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.049426  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.050017  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.050061  340927 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.22.0
	I1128 02:42:12.053046  340927 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I1128 02:42:12.053069  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I1128 02:42:12.053089  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.051677  340927 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1128 02:42:12.053119  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I1128 02:42:12.053142  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.050103  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.050121  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.049419  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.050269  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.050442  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:12.050080  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.051819  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.051831  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.053647  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.053792  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.053856  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.054296  340927 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 02:42:12.054309  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 02:42:12.054325  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:12.054385  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.054562  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.056333  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.056777  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.056806  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.057214  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.057385  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.057504  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.057564  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.057614  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.058173  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.058238  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.058258  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.058282  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.058310  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.058458  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.058617  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:12.058706  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:12.058740  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:12.058914  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:12.059235  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:12.059448  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:12.059601  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	W1128 02:42:12.060693  340927 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:33988->192.168.39.100:22: read: connection reset by peer
	I1128 02:42:12.060720  340927 retry.go:31] will retry after 200.930282ms: ssh: handshake failed: read tcp 192.168.39.1:33988->192.168.39.100:22: read: connection reset by peer
	I1128 02:42:12.279948  340927 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 02:42:12.280819  340927 node_ready.go:35] waiting up to 6m0s for node "addons-681229" to be "Ready" ...
	I1128 02:42:12.308421  340927 node_ready.go:49] node "addons-681229" has status "Ready":"True"
	I1128 02:42:12.308453  340927 node_ready.go:38] duration metric: took 27.573936ms waiting for node "addons-681229" to be "Ready" ...
	I1128 02:42:12.308468  340927 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 02:42:12.311523  340927 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 02:42:12.311548  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1128 02:42:12.355337  340927 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1128 02:42:12.355369  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1128 02:42:12.426415  340927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:12.435161  340927 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I1128 02:42:12.435197  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1128 02:42:12.439613  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1128 02:42:12.520441  340927 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 02:42:12.520475  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 02:42:12.585983  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1128 02:42:12.586014  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1128 02:42:12.586633  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1128 02:42:12.586807  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1128 02:42:12.593917  340927 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1128 02:42:12.593944  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1128 02:42:12.620251  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 02:42:12.623712  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1128 02:42:12.634036  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1128 02:42:12.635677  340927 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I1128 02:42:12.635700  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I1128 02:42:12.637252  340927 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I1128 02:42:12.637272  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I1128 02:42:12.643916  340927 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1128 02:42:12.643940  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1128 02:42:12.655248  340927 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 02:42:12.655281  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 02:42:12.693221  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 02:42:12.777899  340927 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1128 02:42:12.777940  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1128 02:42:12.895420  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1128 02:42:12.895453  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1128 02:42:12.895474  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1128 02:42:12.895890  340927 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1128 02:42:12.895906  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I1128 02:42:12.895978  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 02:42:12.971364  340927 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I1128 02:42:12.971401  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I1128 02:42:12.994663  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1128 02:42:12.994715  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1128 02:42:13.037231  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1128 02:42:13.037287  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1128 02:42:13.079139  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I1128 02:42:13.120214  340927 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I1128 02:42:13.120248  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I1128 02:42:13.123712  340927 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 02:42:13.123733  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1128 02:42:13.189686  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1128 02:42:13.189713  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1128 02:42:13.214011  340927 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I1128 02:42:13.214061  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I1128 02:42:13.311208  340927 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1128 02:42:13.311244  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1128 02:42:13.315978  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 02:42:13.330084  340927 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I1128 02:42:13.330119  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I1128 02:42:13.369691  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1128 02:42:13.369719  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1128 02:42:13.409937  340927 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I1128 02:42:13.409981  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I1128 02:42:13.443641  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1128 02:42:13.443673  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1128 02:42:13.496892  340927 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I1128 02:42:13.496927  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I1128 02:42:13.507851  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1128 02:42:13.507884  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1128 02:42:13.548347  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I1128 02:42:13.555146  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1128 02:42:13.555172  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1128 02:42:13.582012  340927 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1128 02:42:13.582037  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1128 02:42:13.620421  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1128 02:42:15.835813  340927 pod_ready.go:102] pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:16.474419  340927 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.194417823s)
	I1128 02:42:16.474457  340927 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 02:42:17.838015  340927 pod_ready.go:102] pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:18.225640  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.785970114s)
	I1128 02:42:18.225679  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.639018956s)
	I1128 02:42:18.225702  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:18.225716  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:18.225721  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:18.225733  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:18.226503  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:18.226560  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:18.226575  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:18.226586  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:18.226505  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:18.226528  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:18.226539  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:18.226672  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:18.226686  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:18.226712  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:18.226835  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:18.226845  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:18.226903  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:18.227223  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:18.227229  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:18.227247  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:18.723800  340927 pod_ready.go:97] error getting pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-28rxr" not found
	I1128 02:42:18.723835  340927 pod_ready.go:81] duration metric: took 6.29737382s waiting for pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace to be "Ready" ...
	E1128 02:42:18.723847  340927 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-28rxr" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-28rxr" not found
	I1128 02:42:18.723853  340927 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:19.417900  340927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1128 02:42:19.417948  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:19.420645  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:19.421115  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:19.421149  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:19.421348  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:19.421614  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:19.421801  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:19.421928  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:19.556246  340927 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1128 02:42:19.586506  340927 addons.go:231] Setting addon gcp-auth=true in "addons-681229"
	I1128 02:42:19.586572  340927 host.go:66] Checking if "addons-681229" exists ...
	I1128 02:42:19.586908  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:19.586939  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:19.602502  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43361
	I1128 02:42:19.603029  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:19.603621  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:19.603648  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:19.604067  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:19.604581  340927 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:42:19.604628  340927 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:42:19.619756  340927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35377
	I1128 02:42:19.620212  340927 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:42:19.620894  340927 main.go:141] libmachine: Using API Version  1
	I1128 02:42:19.620925  340927 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:42:19.621313  340927 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:42:19.621556  340927 main.go:141] libmachine: (addons-681229) Calling .GetState
	I1128 02:42:19.623464  340927 main.go:141] libmachine: (addons-681229) Calling .DriverName
	I1128 02:42:19.623720  340927 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1128 02:42:19.623753  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHHostname
	I1128 02:42:19.626466  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:19.626923  340927 main.go:141] libmachine: (addons-681229) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dd:03:de", ip: ""} in network mk-addons-681229: {Iface:virbr1 ExpiryTime:2023-11-28 03:41:30 +0000 UTC Type:0 Mac:52:54:00:dd:03:de Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:addons-681229 Clientid:01:52:54:00:dd:03:de}
	I1128 02:42:19.626956  340927 main.go:141] libmachine: (addons-681229) DBG | domain addons-681229 has defined IP address 192.168.39.100 and MAC address 52:54:00:dd:03:de in network mk-addons-681229
	I1128 02:42:19.627092  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHPort
	I1128 02:42:19.627292  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHKeyPath
	I1128 02:42:19.627474  340927 main.go:141] libmachine: (addons-681229) Calling .GetSSHUsername
	I1128 02:42:19.627637  340927 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/addons-681229/id_rsa Username:docker}
	I1128 02:42:19.676195  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.089350206s)
	I1128 02:42:19.676277  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:19.676292  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:19.676642  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:19.676663  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:19.676661  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:19.676673  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:19.676753  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:19.677003  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:19.677018  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:19.789772  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.169482309s)
	I1128 02:42:19.789832  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:19.789847  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:19.790145  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:19.790199  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:19.790208  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:19.790218  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:19.790226  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:19.790475  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:19.790525  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:19.790544  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:19.815679  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:19.815704  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:19.816076  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:19.816130  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:19.816139  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.018619  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:21.197874  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.563787638s)
	I1128 02:42:21.197931  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.504674838s)
	I1128 02:42:21.197977  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.197986  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198003  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.302554079s)
	I1128 02:42:21.198028  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.197938  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198076  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198130  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.302125985s)
	I1128 02:42:21.198046  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198153  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198164  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198197  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.574448674s)
	I1128 02:42:21.198228  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198244  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198307  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.882294607s)
	W1128 02:42:21.198343  340927 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1128 02:42:21.198366  340927 retry.go:31] will retry after 250.745507ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1128 02:42:21.198368  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.198406  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.198415  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.198437  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198450  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198463  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.650073486s)
	I1128 02:42:21.198520  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198527  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198732  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.198747  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.198757  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198765  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198821  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.198202  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.119020793s)
	I1128 02:42:21.198851  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.198863  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.198873  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198873  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198881  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198886  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198936  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.198946  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.198958  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.198969  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.198982  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.199009  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.199018  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.199019  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.199028  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.199037  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.199506  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.199546  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.199555  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.199836  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.199857  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.199883  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.199899  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.199909  340927 addons.go:467] Verifying addon registry=true in "addons-681229"
	I1128 02:42:21.202549  340927 out.go:177] * Verifying registry addon...
	I1128 02:42:21.200375  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.202618  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.202630  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.202640  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.200412  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.200437  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.202704  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.202715  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.202723  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.200479  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.202796  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.202813  340927 addons.go:467] Verifying addon metrics-server=true in "addons-681229"
	I1128 02:42:21.200500  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.202852  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.200519  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.201044  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.201061  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.202947  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.203160  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.203196  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.204693  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.203218  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:21.203235  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.204718  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.204727  340927 addons.go:467] Verifying addon ingress=true in "addons-681229"
	I1128 02:42:21.206359  340927 out.go:177] * Verifying ingress addon...
	I1128 02:42:21.205696  340927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1128 02:42:21.208694  340927 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1128 02:42:21.223765  340927 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1128 02:42:21.223795  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:21.234575  340927 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1128 02:42:21.234597  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:21.253353  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:21.277287  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:21.277327  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:21.277691  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:21.277712  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:21.277867  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:21.449607  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1128 02:42:21.841449  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:21.878854  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:22.037011  340927 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.413251951s)
	I1128 02:42:22.038947  340927 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I1128 02:42:22.037256  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.416769822s)
	I1128 02:42:22.041885  340927 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I1128 02:42:22.040460  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:22.043298  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:22.043371  340927 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1128 02:42:22.043393  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1128 02:42:22.043680  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:22.043725  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:22.043741  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:22.043902  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:22.043920  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:22.044145  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:22.044167  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:22.044182  340927 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-681229"
	I1128 02:42:22.046260  340927 out.go:177] * Verifying csi-hostpath-driver addon...
	I1128 02:42:22.049179  340927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1128 02:42:22.087764  340927 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1128 02:42:22.087790  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:22.097601  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:22.187329  340927 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1128 02:42:22.187363  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1128 02:42:22.319394  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:22.374741  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:22.518566  340927 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1128 02:42:22.518601  340927 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I1128 02:42:22.587301  340927 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1128 02:42:22.646261  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:22.786529  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:22.811378  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:23.146737  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:23.259448  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:23.287685  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:23.402504  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:23.605333  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:23.758938  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:23.785827  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:24.112193  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:24.267522  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:24.358586  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:24.372837  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.923155865s)
	I1128 02:42:24.372940  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:24.372962  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:24.373318  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:24.373336  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:24.373346  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:24.373354  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:24.373774  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:24.373780  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:24.373801  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:24.623604  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:24.642520  340927 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.055168554s)
	I1128 02:42:24.642587  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:24.642610  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:24.642943  340927 main.go:141] libmachine: (addons-681229) DBG | Closing plugin on server side
	I1128 02:42:24.643044  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:24.643083  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:24.643113  340927 main.go:141] libmachine: Making call to close driver server
	I1128 02:42:24.643126  340927 main.go:141] libmachine: (addons-681229) Calling .Close
	I1128 02:42:24.643469  340927 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:42:24.643509  340927 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:42:24.644623  340927 addons.go:467] Verifying addon gcp-auth=true in "addons-681229"
	I1128 02:42:24.646244  340927 out.go:177] * Verifying gcp-auth addon...
	I1128 02:42:24.648578  340927 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1128 02:42:24.671218  340927 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1128 02:42:24.671249  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:24.697096  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:24.771849  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:24.784372  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:25.123803  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:25.201517  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:25.259453  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:25.290626  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:25.409640  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:25.607456  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:25.701592  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:25.759246  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:25.782968  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:26.107970  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:26.205958  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:26.272323  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:26.294582  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:26.605976  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:26.702719  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:26.759317  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:26.783248  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:27.104682  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:27.201834  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:27.261688  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:27.282443  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:27.604322  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:27.701924  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:27.759154  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:27.787916  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:27.906361  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:28.105105  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:28.218552  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:28.266241  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:28.288280  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:28.625841  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:28.702157  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:28.806013  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:28.819079  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:29.107623  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:29.200827  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:29.260315  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:29.286731  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:29.606057  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:29.701236  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:29.758074  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:29.783693  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:30.137508  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:30.202648  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:30.258719  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:30.288687  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:30.399914  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:30.603656  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:30.701413  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:30.759400  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:30.785838  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:31.105070  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:31.201699  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:31.258331  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:31.285092  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:31.604516  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:31.703450  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:31.761675  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:31.786742  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:32.109085  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:32.201193  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:32.265149  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:32.302947  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:32.428519  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:33.005663  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:33.009669  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:33.009939  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:33.010197  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:33.120453  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:33.201791  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:33.258543  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:33.285123  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:33.604602  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:33.716643  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:33.758448  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:33.786629  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:34.108082  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:34.202061  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:34.262661  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:34.289170  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:34.608412  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:34.722790  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:34.770446  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:34.790409  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:34.902655  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:35.103791  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:35.202406  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:35.267544  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:35.286498  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:35.608592  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:35.703895  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:35.765775  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:35.792084  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:36.116479  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:36.213217  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:36.258597  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:36.293188  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:36.605341  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:36.703786  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:36.765649  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:36.811438  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:37.107964  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:37.201696  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:37.258041  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:37.290790  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:37.399623  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:37.607460  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:37.701461  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:37.759175  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:37.784471  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:38.113005  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:38.201844  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:38.262589  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:38.284242  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:38.603939  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:38.702925  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:38.758641  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:38.783636  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:39.103533  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:39.201394  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:39.259082  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:39.283226  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:39.607266  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:39.701689  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:39.759892  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:39.785691  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:39.904903  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:40.112282  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:40.203287  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:40.260587  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:40.283267  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:40.603497  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:40.702085  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:40.760529  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:40.786934  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:41.105760  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:41.201715  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:41.260655  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:41.282687  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:41.606036  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:41.704273  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:41.759873  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:41.788497  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:42.104288  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:42.202610  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:42.259226  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:42.283362  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:42.399381  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:42.604094  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:42.701020  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:42.763780  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:42.782792  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:43.117345  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:43.219328  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:43.259244  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:43.283969  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:43.604056  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:43.701709  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:43.759738  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:43.783378  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:44.105159  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:44.202723  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:44.258948  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:44.283830  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:44.400037  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:44.609539  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:44.702732  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:44.757832  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:44.782945  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:45.105433  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:45.203759  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:45.258009  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:45.284913  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:45.606187  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:45.701216  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:45.773764  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:45.785819  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:46.411300  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:46.413199  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:46.413833  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:46.417089  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:46.417535  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:46.604951  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:46.712435  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:46.765078  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:46.786555  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:47.103800  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:47.202619  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:47.258120  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:47.283927  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:47.603551  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:47.701808  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:47.762797  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:47.786168  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:48.104368  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:48.201637  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:48.259144  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:48.283571  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:48.604981  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:48.702275  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:48.761173  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:48.783404  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:48.898517  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:49.103991  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:49.202131  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:49.258768  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:49.283094  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:49.603312  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:49.701987  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:49.759650  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:49.785563  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:50.104457  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:50.201705  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:50.260021  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:50.284396  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:50.605960  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:50.701590  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:50.766455  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:50.793044  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:50.898984  340927 pod_ready.go:102] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"False"
	I1128 02:42:51.104418  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:51.201211  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:51.258589  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:51.282725  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:51.604664  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:51.701923  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:51.759154  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:51.783426  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:52.111218  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:52.201089  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:52.258121  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:52.283779  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:52.399835  340927 pod_ready.go:92] pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.399863  340927 pod_ready.go:81] duration metric: took 33.676002126s waiting for pod "coredns-5dd5756b68-cdbbh" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.399876  340927 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.406127  340927 pod_ready.go:92] pod "etcd-addons-681229" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.406154  340927 pod_ready.go:81] duration metric: took 6.270133ms waiting for pod "etcd-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.406166  340927 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.430066  340927 pod_ready.go:92] pod "kube-apiserver-addons-681229" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.430102  340927 pod_ready.go:81] duration metric: took 23.927554ms waiting for pod "kube-apiserver-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.430120  340927 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.449399  340927 pod_ready.go:92] pod "kube-controller-manager-addons-681229" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.449423  340927 pod_ready.go:81] duration metric: took 19.295564ms waiting for pod "kube-controller-manager-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.449433  340927 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8bhzv" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.460602  340927 pod_ready.go:92] pod "kube-proxy-8bhzv" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.460634  340927 pod_ready.go:81] duration metric: took 11.192878ms waiting for pod "kube-proxy-8bhzv" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.460648  340927 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.604287  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:52.700965  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:52.759735  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:52.783149  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:52.796116  340927 pod_ready.go:92] pod "kube-scheduler-addons-681229" in "kube-system" namespace has status "Ready":"True"
	I1128 02:42:52.796150  340927 pod_ready.go:81] duration metric: took 335.492832ms waiting for pod "kube-scheduler-addons-681229" in "kube-system" namespace to be "Ready" ...
	I1128 02:42:52.796165  340927 pod_ready.go:38] duration metric: took 40.487681223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 02:42:52.796190  340927 api_server.go:52] waiting for apiserver process to appear ...
	I1128 02:42:52.796259  340927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 02:42:52.825910  340927 api_server.go:72] duration metric: took 40.833202128s to wait for apiserver process to appear ...
	I1128 02:42:52.825944  340927 api_server.go:88] waiting for apiserver healthz status ...
	I1128 02:42:52.825967  340927 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I1128 02:42:52.832380  340927 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I1128 02:42:52.833672  340927 api_server.go:141] control plane version: v1.28.4
	I1128 02:42:52.833701  340927 api_server.go:131] duration metric: took 7.749748ms to wait for apiserver health ...
	I1128 02:42:52.833710  340927 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 02:42:53.005121  340927 system_pods.go:59] 18 kube-system pods found
	I1128 02:42:53.005154  340927 system_pods.go:61] "coredns-5dd5756b68-cdbbh" [3a68b4c3-caed-4986-a621-3995d6eaa52f] Running
	I1128 02:42:53.005159  340927 system_pods.go:61] "csi-hostpath-attacher-0" [deda611b-3f29-4504-b508-079ace8552cf] Running
	I1128 02:42:53.005166  340927 system_pods.go:61] "csi-hostpath-resizer-0" [c692c4bc-8d11-47d6-bb53-859d60eeedcb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1128 02:42:53.005172  340927 system_pods.go:61] "csi-hostpathplugin-2hxrx" [07d8ddb9-6816-41ad-8bfd-950b8ebd306f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1128 02:42:53.005179  340927 system_pods.go:61] "etcd-addons-681229" [68c7372c-8741-49c9-ba02-84a71abe9a9f] Running
	I1128 02:42:53.005183  340927 system_pods.go:61] "kube-apiserver-addons-681229" [73a25a74-1aec-4742-8c13-b7b16c2c7fec] Running
	I1128 02:42:53.005187  340927 system_pods.go:61] "kube-controller-manager-addons-681229" [b39740a7-eb6f-4af7-937f-24e46ecacda3] Running
	I1128 02:42:53.005192  340927 system_pods.go:61] "kube-ingress-dns-minikube" [351ff202-1958-405a-83bd-c4adf73855d3] Running
	I1128 02:42:53.005202  340927 system_pods.go:61] "kube-proxy-8bhzv" [3d22068c-8fbb-4267-892b-c66fa3fc1173] Running
	I1128 02:42:53.005207  340927 system_pods.go:61] "kube-scheduler-addons-681229" [312c39e2-80b1-4096-b5cf-775da27d6ff2] Running
	I1128 02:42:53.005221  340927 system_pods.go:61] "metrics-server-7c66d45ddc-fdxck" [f314eebf-cb93-487b-ad2b-9dff2d03acb1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 02:42:53.005233  340927 system_pods.go:61] "nvidia-device-plugin-daemonset-bp85w" [8f81dd73-e882-4dc9-bd65-972a34309eed] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1128 02:42:53.005246  340927 system_pods.go:61] "registry-k72qb" [ab234015-31f7-499a-9928-a0ad70000068] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1128 02:42:53.005261  340927 system_pods.go:61] "registry-proxy-h9tnv" [f745a55b-172d-43c2-a850-12753f22f47a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1128 02:42:53.005272  340927 system_pods.go:61] "snapshot-controller-58dbcc7b99-kp6vk" [007aee99-d08e-4fbb-b32b-c1bacef6a79f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1128 02:42:53.005283  340927 system_pods.go:61] "snapshot-controller-58dbcc7b99-q75cq" [0f0375dd-fa95-4dfd-9558-eff74a5baf8e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1128 02:42:53.005290  340927 system_pods.go:61] "storage-provisioner" [0136fc3b-07ad-4cf9-a760-2f286eda9129] Running
	I1128 02:42:53.005296  340927 system_pods.go:61] "tiller-deploy-7b677967b9-mphnj" [166abcd3-ff81-472d-b1e6-c0aad1a85f5b] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1128 02:42:53.005305  340927 system_pods.go:74] duration metric: took 171.58787ms to wait for pod list to return data ...
	I1128 02:42:53.005316  340927 default_sa.go:34] waiting for default service account to be created ...
	I1128 02:42:53.104751  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:53.195805  340927 default_sa.go:45] found service account: "default"
	I1128 02:42:53.195835  340927 default_sa.go:55] duration metric: took 190.507737ms for default service account to be created ...
	I1128 02:42:53.195847  340927 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 02:42:53.201931  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:53.261802  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:53.286097  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:53.438485  340927 system_pods.go:86] 18 kube-system pods found
	I1128 02:42:53.438520  340927 system_pods.go:89] "coredns-5dd5756b68-cdbbh" [3a68b4c3-caed-4986-a621-3995d6eaa52f] Running
	I1128 02:42:53.438525  340927 system_pods.go:89] "csi-hostpath-attacher-0" [deda611b-3f29-4504-b508-079ace8552cf] Running
	I1128 02:42:53.438533  340927 system_pods.go:89] "csi-hostpath-resizer-0" [c692c4bc-8d11-47d6-bb53-859d60eeedcb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1128 02:42:53.438581  340927 system_pods.go:89] "csi-hostpathplugin-2hxrx" [07d8ddb9-6816-41ad-8bfd-950b8ebd306f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1128 02:42:53.438598  340927 system_pods.go:89] "etcd-addons-681229" [68c7372c-8741-49c9-ba02-84a71abe9a9f] Running
	I1128 02:42:53.438604  340927 system_pods.go:89] "kube-apiserver-addons-681229" [73a25a74-1aec-4742-8c13-b7b16c2c7fec] Running
	I1128 02:42:53.438612  340927 system_pods.go:89] "kube-controller-manager-addons-681229" [b39740a7-eb6f-4af7-937f-24e46ecacda3] Running
	I1128 02:42:53.438620  340927 system_pods.go:89] "kube-ingress-dns-minikube" [351ff202-1958-405a-83bd-c4adf73855d3] Running
	I1128 02:42:53.438624  340927 system_pods.go:89] "kube-proxy-8bhzv" [3d22068c-8fbb-4267-892b-c66fa3fc1173] Running
	I1128 02:42:53.438630  340927 system_pods.go:89] "kube-scheduler-addons-681229" [312c39e2-80b1-4096-b5cf-775da27d6ff2] Running
	I1128 02:42:53.438639  340927 system_pods.go:89] "metrics-server-7c66d45ddc-fdxck" [f314eebf-cb93-487b-ad2b-9dff2d03acb1] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 02:42:53.438649  340927 system_pods.go:89] "nvidia-device-plugin-daemonset-bp85w" [8f81dd73-e882-4dc9-bd65-972a34309eed] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1128 02:42:53.438658  340927 system_pods.go:89] "registry-k72qb" [ab234015-31f7-499a-9928-a0ad70000068] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1128 02:42:53.438665  340927 system_pods.go:89] "registry-proxy-h9tnv" [f745a55b-172d-43c2-a850-12753f22f47a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1128 02:42:53.438673  340927 system_pods.go:89] "snapshot-controller-58dbcc7b99-kp6vk" [007aee99-d08e-4fbb-b32b-c1bacef6a79f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1128 02:42:53.438680  340927 system_pods.go:89] "snapshot-controller-58dbcc7b99-q75cq" [0f0375dd-fa95-4dfd-9558-eff74a5baf8e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1128 02:42:53.438684  340927 system_pods.go:89] "storage-provisioner" [0136fc3b-07ad-4cf9-a760-2f286eda9129] Running
	I1128 02:42:53.438689  340927 system_pods.go:89] "tiller-deploy-7b677967b9-mphnj" [166abcd3-ff81-472d-b1e6-c0aad1a85f5b] Running / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I1128 02:42:53.438696  340927 system_pods.go:126] duration metric: took 242.843275ms to wait for k8s-apps to be running ...
	I1128 02:42:53.438704  340927 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 02:42:53.438751  340927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 02:42:53.481064  340927 system_svc.go:56] duration metric: took 42.348044ms WaitForService to wait for kubelet.
	I1128 02:42:53.481094  340927 kubeadm.go:581] duration metric: took 41.488397061s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 02:42:53.481113  340927 node_conditions.go:102] verifying NodePressure condition ...
	I1128 02:42:53.597450  340927 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 02:42:53.597513  340927 node_conditions.go:123] node cpu capacity is 2
	I1128 02:42:53.597526  340927 node_conditions.go:105] duration metric: took 116.409131ms to run NodePressure ...
	I1128 02:42:53.597540  340927 start.go:228] waiting for startup goroutines ...
	I1128 02:42:53.609199  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:53.701235  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:53.760290  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:53.783321  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:54.105387  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:54.202295  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:54.259352  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:54.285377  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:54.605580  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:54.701883  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:54.758641  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:54.782590  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:55.104436  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:55.201586  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:55.259169  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:55.285489  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:55.604393  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:55.701279  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:55.758583  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:55.783081  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:56.105222  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:56.201291  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:56.258291  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:56.283770  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:56.604109  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:56.701563  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:56.759718  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:56.783261  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:57.107893  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:57.201859  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:57.259759  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:57.285891  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:58.094984  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:58.119060  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:58.121621  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:58.131543  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:58.133403  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:58.203475  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:58.260188  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:58.288290  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:58.605592  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:58.702216  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:58.762101  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:58.806362  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:59.104042  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:59.203751  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:59.258859  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:59.288892  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:42:59.603568  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:42:59.701816  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:42:59.760366  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:42:59.788181  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:00.104036  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:00.202565  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:00.261564  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:00.288564  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:00.605615  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:00.702221  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:00.759911  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:00.783710  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:01.103466  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:01.201590  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:01.262515  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:01.283539  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:01.605088  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:01.701051  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:01.762092  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:01.789663  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:02.107409  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:02.206942  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:02.258871  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:02.284461  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:02.624730  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:02.715579  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:02.762313  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:02.801559  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:03.105649  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:03.202029  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:03.258107  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:03.283668  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:03.605208  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:03.702659  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:03.758829  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:03.783678  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:04.104026  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:04.207855  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:04.261797  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:04.286469  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:04.604293  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:04.701605  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:04.759589  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:04.782673  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:05.108538  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:05.201615  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:05.259327  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:05.284045  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:05.604386  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:05.701704  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:05.758142  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:05.783506  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:06.103819  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:06.201583  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:06.260360  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:06.283518  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:06.604430  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:06.700982  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:06.761135  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:06.783389  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:07.104265  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:07.201861  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:07.258702  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:07.282826  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:07.604547  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:07.701006  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:07.764776  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:07.782668  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:08.105601  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:08.201354  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:08.260809  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:08.291190  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:08.604251  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:08.701563  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:08.761401  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:08.785197  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:09.114524  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:09.200994  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:09.263118  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:09.285836  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:09.605069  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:09.703326  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:09.759986  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:09.783163  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:10.105869  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:10.201753  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:10.258817  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:10.284198  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:10.799822  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:10.805276  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:10.806457  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:10.807701  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:11.103936  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:11.203389  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:11.259194  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:11.283425  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:11.606655  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:11.703291  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:11.758961  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:11.783649  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:12.104134  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:12.201509  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:12.260761  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:12.283787  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:12.605299  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:12.704270  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:12.759268  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:12.784276  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:13.104584  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:13.201509  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:13.261370  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:13.283795  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:13.605408  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:13.701432  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:13.759367  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:13.783778  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:14.105064  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:14.201562  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:14.258991  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:14.283226  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:14.605865  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:14.700796  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:14.758256  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:14.783370  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:15.105328  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:15.200788  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:15.262151  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:15.284068  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:15.608148  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:15.701950  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:15.758460  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:15.783393  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:16.104543  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:16.201156  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:16.263574  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:16.284945  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1128 02:43:16.605122  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:16.702247  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:16.762117  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:16.783242  340927 kapi.go:107] duration metric: took 55.577546968s to wait for kubernetes.io/minikube-addons=registry ...
	I1128 02:43:17.105075  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:17.201976  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:17.258872  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:17.604552  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:17.701756  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:17.772447  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:18.104758  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:18.202891  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:18.263339  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:18.611329  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:18.701283  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:18.767263  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:19.104458  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:19.201787  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:19.257961  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:19.622067  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:19.701273  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:19.761580  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:20.104542  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:20.201611  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:20.258495  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:20.603858  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:20.708054  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:20.765260  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:21.104230  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:21.201380  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:21.259031  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:21.603473  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:21.701456  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:21.781408  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:22.108751  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:22.201868  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:22.260107  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:22.609605  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:22.706055  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:22.764982  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:23.110109  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:23.205371  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:23.259377  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:23.603803  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:23.702526  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:23.766967  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:24.104463  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:24.202226  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:24.259466  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:24.605027  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:24.704342  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:24.763265  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:25.105559  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:25.202700  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:25.258176  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:25.603835  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:25.702150  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:25.763067  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:26.122087  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:26.205883  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:26.263973  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:26.607857  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:26.701386  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:26.766429  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:27.109082  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:27.204111  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:27.258198  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:27.606887  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:27.706990  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:27.758780  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:28.105765  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:28.201759  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:28.263544  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:28.604042  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:28.701559  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:28.759168  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:29.108232  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:29.202556  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:29.259301  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:29.618141  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:29.701425  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:29.761721  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:30.104252  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:30.202742  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:30.261939  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:30.605116  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:30.701444  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:30.758876  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:31.104324  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:31.201769  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:31.257746  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:31.604759  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:31.701972  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:31.758834  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:32.104658  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:32.202089  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:32.259840  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:32.604403  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:32.701998  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:32.759564  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:33.107780  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:33.202828  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:33.262623  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:33.605274  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:33.702024  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:33.760209  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:34.104121  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:34.201818  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:34.258225  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:34.603411  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:34.701863  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:34.762701  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:35.104794  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:35.201712  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:35.258048  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:35.606553  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:35.701756  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:35.758104  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:36.105600  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:36.208283  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:36.259923  340927 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1128 02:43:36.605034  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:36.701357  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:36.760070  340927 kapi.go:107] duration metric: took 1m15.551372754s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1128 02:43:37.103181  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:37.201249  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:37.615334  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:37.703546  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:38.103965  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:38.201271  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:38.605151  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:38.701848  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:39.123171  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:39.204612  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:39.604755  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:39.702304  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:40.105035  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:40.202289  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:40.604824  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:40.701555  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:41.104973  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:41.200935  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1128 02:43:41.603417  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:41.701511  340927 kapi.go:107] duration metric: took 1m17.052929073s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1128 02:43:41.703409  340927 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-681229 cluster.
	I1128 02:43:41.705037  340927 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1128 02:43:41.706555  340927 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1128 02:43:42.104259  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:42.604636  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:43.104103  340927 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1128 02:43:43.603925  340927 kapi.go:107] duration metric: took 1m21.55474344s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1128 02:43:43.606224  340927 out.go:177] * Enabled addons: nvidia-device-plugin, cloud-spanner, storage-provisioner, storage-provisioner-rancher, metrics-server, inspektor-gadget, ingress-dns, helm-tiller, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1128 02:43:43.608105  340927 addons.go:502] enable addons completed in 1m31.713215834s: enabled=[nvidia-device-plugin cloud-spanner storage-provisioner storage-provisioner-rancher metrics-server inspektor-gadget ingress-dns helm-tiller default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1128 02:43:43.608148  340927 start.go:233] waiting for cluster config update ...
	I1128 02:43:43.608204  340927 start.go:242] writing updated cluster config ...
	I1128 02:43:43.608503  340927 ssh_runner.go:195] Run: rm -f paused
	I1128 02:43:43.659037  340927 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 02:43:43.661038  340927 out.go:177] * Done! kubectl is now configured to use "addons-681229" cluster and "default" namespace by default
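	(Editor's note, not part of the captured run: the gcp-auth messages above say a pod can opt out of credential mounting by carrying the `gcp-auth-skip-secret` label. A minimal sketch of that, assuming a placeholder pod name "demo" and a busybox image that were not used in this test:

	  # Illustrative only: "demo" and busybox:1.36 are hypothetical, not from this run.
	  # The label must be present at creation time, since the gcp-auth webhook only
	  # mutates pods at admission.
	  kubectl --context addons-681229 run demo --image=busybox:1.36 --restart=Never \
	    --labels=gcp-auth-skip-secret=true -- sleep 3600

	Pods created before the addon was enabled keep their old spec; as the log notes, they need to be recreated, or the addon re-enabled with --refresh, for the credential mount to change.)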
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 02:41:27 UTC, ends at Tue 2023-11-28 02:46:34 UTC. --
	Nov 28 02:46:33 addons-681229 crio[720]: time="2023-11-28 02:46:33.981253029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701139593981235767,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=b442efc9-60b8-494a-a43c-27782e755472 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:33 addons-681229 crio[720]: time="2023-11-28 02:46:33.982338734Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c56b1773-06c7-4950-9df8-5f2e7be0faf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:33 addons-681229 crio[720]: time="2023-11-28 02:46:33.982444267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c56b1773-06c7-4950-9df8-5f2e7be0faf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:33 addons-681229 crio[720]: time="2023-11-28 02:46:33.982757069Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21564909f5f24a0e0a6753a05b80a5f6a4fd9849fcb075a850cab8d34b02da46,PodSandboxId:bddf650de4949bc80bf16479e58445f28ca7b1dc1db3b2e0edbcf7f800d299b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701139585339175405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77qst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f7b449-605b-4a51-8203-267ed337aa7d,},Annotations:map[string]string{io.kubernetes.container.hash: c7e6959a,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93aff8594addacbcd17ef05b5b1c3b91f61a707c182ab97cf8213f99d0cadf,PodSandboxId:5599a223aa67789a349a6c08aae4d1693239f6f7fbf960ce0b321d055136be2c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701139470563558826,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-x8dgt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 239d7f4a-b099-4dd4-9010-b78d4265aa47,},An
notations:map[string]string{io.kubernetes.container.hash: 3033062f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c9e61cecc600af1cd5d29c9bea946765510c8bbd07e4eec03f081a27f1f7d0,PodSandboxId:3c9e44f87bfb20a90cc5a3f9b50737e2223be7e755ebd0d1ce5820ce32032c04,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701139443940964506,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 3fc8c277-fb76-4adf-9332-9e20e1d69cb5,},Annotations:map[string]string{io.kubernetes.container.hash: f1b28216,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fbf7aea4957569ca413c5c22a8246ebc0af5b1360e560a579e792a86068af2,PodSandboxId:407652b6365b37e053b0fa5b202945a928e75e88ca89a9aef3a252a648e9c7dc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701139420773474029,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ggn66,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f10766ce-25f5-4bad-92ed-28a61d85aa41,},Annotations:map[string]string{io.kubernetes.container.hash: b0b13085,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2434d9b3a9279b9dbd10fe6b6dd082c18690c0b7df1f23a8ed1339020bbfb28d,PodSandboxId:1e477088928c4bf64ac9f0f431f43080d60d02981aaab394339d1c0d5ab1b257,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17011393
98495894793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-txstk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc6373e9-92fc-4968-b128-d9c87ce11d40,},Annotations:map[string]string{io.kubernetes.container.hash: 1f696751,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a72c7806f600c6f429f8599e281c38b29f4fe7bcb3baa968162083af637191,PodSandboxId:788dc251f8d3e8b4ebcbab7cfbc157515907aa81a0d91aeaff9b8075c73360cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701139397883017340,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xc959,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701a9fc9-356f-465c-b1d2-c4379fa76eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 50456202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b180fc5a27a7c2270124009e3117c9353289f1c3e66096d3b3d8ab0cdb2451,PodSandboxId:bfb5741dc890c5fb93699b3a98eef36492af72affabbb220264062657bcafe1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701139354234032055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0136fc3b-07ad-4cf9-a760-2f286eda9129,},Annotations:map[string]string{io.kubernetes.container.hash: fedc0bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa14d5134a83beb010a6e9503361e44d7b1690945ecbda5075b8d3f0aa96e8,PodSandboxId:77a78ef8a4d61062632f2a5bcebe290a77510487d13719ba27da671971ba3b0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701139347345514621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bhzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d22068c-8fbb-4267-892b-c66fa3fc1173,},Annotations:map[string]string{io.kubernetes.container.hash: ec0a3d64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be882fe1c700b4428ce34b4762b78d0d6ae946100aebb346fb9fa1dbf1c362c,PodSandboxId:b4aaf6b8c871fe9482a2cf68ae175fee61441557eb093c533c53e1120e34b65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
701139335857166108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a68b4c3-caed-4986-a621-3995d6eaa52f,},Annotations:map[string]string{io.kubernetes.container.hash: 2aa7884b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62689d201ceb6c299be3d209a07e2a8cbf581572089229fa064921f872ae063a,PodSandboxId:fe787ef5496dfb7c66bc2f3edd3b3d65c3423112fb4f6bac44cc08ddad2589b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f806
33b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701139311770703192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76a7b199b2fb1915478d0b46e10b7bee,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3f251fe079220440f65115616a825aca6d0a1205e4982bee63237204c6822ea,PodSandboxId:2dbd29f5075ebeb5e418a2ca2bb8c7d07fb42c7ed3653368ef13f60707137e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2
e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701139311821118825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4635e827a28420a8c9c6a2c804a49cf,},Annotations:map[string]string{io.kubernetes.container.hash: 8e250250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac13a485c6f3a440f1c4286a6f6f7f3a2ef8e0fc1338a9bd5ecf7cbd998d7277,PodSandboxId:bbd78f4a4f8202d1830db566eccde21ef2ad20f529fc5c733efad041706b0494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701139311639574226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7593871174a7e990f18012204732231c,},Annotations:map[string]string{io.kubernetes.container.hash: 690ba766,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74db4e1fb34a69017498f21d33c28cfa8cefd4dbbdc55730116852ebbe197455,PodSandboxId:486f52cd349999efe801b09a1cf42cb1acbf026e111cb5ea7dc98790ed1cdc25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701139311511191022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf76b94d5b9a6bf29faf7a30af8b90d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c56b1773-06c7-4950-9df8-5f2e7be0faf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.022695127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=aa705592-9c61-4270-bc20-173f93422d4f name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.022753503Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=aa705592-9c61-4270-bc20-173f93422d4f name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.024258332Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6bf20d35-2b10-4ed6-b937-1778566d9a67 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.025519086Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701139594025502929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=6bf20d35-2b10-4ed6-b937-1778566d9a67 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.026219960Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2a1c437e-4923-4d88-9ab9-65e1b3e8e48e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.026273094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2a1c437e-4923-4d88-9ab9-65e1b3e8e48e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.026639752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21564909f5f24a0e0a6753a05b80a5f6a4fd9849fcb075a850cab8d34b02da46,PodSandboxId:bddf650de4949bc80bf16479e58445f28ca7b1dc1db3b2e0edbcf7f800d299b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701139585339175405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77qst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f7b449-605b-4a51-8203-267ed337aa7d,},Annotations:map[string]string{io.kubernetes.container.hash: c7e6959a,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93aff8594addacbcd17ef05b5b1c3b91f61a707c182ab97cf8213f99d0cadf,PodSandboxId:5599a223aa67789a349a6c08aae4d1693239f6f7fbf960ce0b321d055136be2c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701139470563558826,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-x8dgt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 239d7f4a-b099-4dd4-9010-b78d4265aa47,},An
notations:map[string]string{io.kubernetes.container.hash: 3033062f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c9e61cecc600af1cd5d29c9bea946765510c8bbd07e4eec03f081a27f1f7d0,PodSandboxId:3c9e44f87bfb20a90cc5a3f9b50737e2223be7e755ebd0d1ce5820ce32032c04,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701139443940964506,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 3fc8c277-fb76-4adf-9332-9e20e1d69cb5,},Annotations:map[string]string{io.kubernetes.container.hash: f1b28216,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fbf7aea4957569ca413c5c22a8246ebc0af5b1360e560a579e792a86068af2,PodSandboxId:407652b6365b37e053b0fa5b202945a928e75e88ca89a9aef3a252a648e9c7dc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701139420773474029,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ggn66,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f10766ce-25f5-4bad-92ed-28a61d85aa41,},Annotations:map[string]string{io.kubernetes.container.hash: b0b13085,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2434d9b3a9279b9dbd10fe6b6dd082c18690c0b7df1f23a8ed1339020bbfb28d,PodSandboxId:1e477088928c4bf64ac9f0f431f43080d60d02981aaab394339d1c0d5ab1b257,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17011393
98495894793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-txstk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc6373e9-92fc-4968-b128-d9c87ce11d40,},Annotations:map[string]string{io.kubernetes.container.hash: 1f696751,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a72c7806f600c6f429f8599e281c38b29f4fe7bcb3baa968162083af637191,PodSandboxId:788dc251f8d3e8b4ebcbab7cfbc157515907aa81a0d91aeaff9b8075c73360cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701139397883017340,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xc959,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701a9fc9-356f-465c-b1d2-c4379fa76eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 50456202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b180fc5a27a7c2270124009e3117c9353289f1c3e66096d3b3d8ab0cdb2451,PodSandboxId:bfb5741dc890c5fb93699b3a98eef36492af72affabbb220264062657bcafe1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701139354234032055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0136fc3b-07ad-4cf9-a760-2f286eda9129,},Annotations:map[string]string{io.kubernetes.container.hash: fedc0bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa14d5134a83beb010a6e9503361e44d7b1690945ecbda5075b8d3f0aa96e8,PodSandboxId:77a78ef8a4d61062632f2a5bcebe290a77510487d13719ba27da671971ba3b0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701139347345514621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bhzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d22068c-8fbb-4267-892b-c66fa3fc1173,},Annotations:map[string]string{io.kubernetes.container.hash: ec0a3d64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be882fe1c700b4428ce34b4762b78d0d6ae946100aebb346fb9fa1dbf1c362c,PodSandboxId:b4aaf6b8c871fe9482a2cf68ae175fee61441557eb093c533c53e1120e34b65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
701139335857166108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a68b4c3-caed-4986-a621-3995d6eaa52f,},Annotations:map[string]string{io.kubernetes.container.hash: 2aa7884b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62689d201ceb6c299be3d209a07e2a8cbf581572089229fa064921f872ae063a,PodSandboxId:fe787ef5496dfb7c66bc2f3edd3b3d65c3423112fb4f6bac44cc08ddad2589b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f806
33b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701139311770703192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76a7b199b2fb1915478d0b46e10b7bee,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3f251fe079220440f65115616a825aca6d0a1205e4982bee63237204c6822ea,PodSandboxId:2dbd29f5075ebeb5e418a2ca2bb8c7d07fb42c7ed3653368ef13f60707137e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2
e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701139311821118825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4635e827a28420a8c9c6a2c804a49cf,},Annotations:map[string]string{io.kubernetes.container.hash: 8e250250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac13a485c6f3a440f1c4286a6f6f7f3a2ef8e0fc1338a9bd5ecf7cbd998d7277,PodSandboxId:bbd78f4a4f8202d1830db566eccde21ef2ad20f529fc5c733efad041706b0494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701139311639574226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7593871174a7e990f18012204732231c,},Annotations:map[string]string{io.kubernetes.container.hash: 690ba766,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74db4e1fb34a69017498f21d33c28cfa8cefd4dbbdc55730116852ebbe197455,PodSandboxId:486f52cd349999efe801b09a1cf42cb1acbf026e111cb5ea7dc98790ed1cdc25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701139311511191022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf76b94d5b9a6bf29faf7a30af8b90d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2a1c437e-4923-4d88-9ab9-65e1b3e8e48e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.062868849Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e0e81443-c5f3-4b93-a0a7-96c9d6ddc2c8 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.062958281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e0e81443-c5f3-4b93-a0a7-96c9d6ddc2c8 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.063849944Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=936b5d37-a057-4292-b361-e96d500ee207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.065419780Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701139594065348876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=936b5d37-a057-4292-b361-e96d500ee207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.066238287Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=009a2491-ab56-475c-b0d0-c270d84debcd name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.066283642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=009a2491-ab56-475c-b0d0-c270d84debcd name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.066623916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21564909f5f24a0e0a6753a05b80a5f6a4fd9849fcb075a850cab8d34b02da46,PodSandboxId:bddf650de4949bc80bf16479e58445f28ca7b1dc1db3b2e0edbcf7f800d299b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701139585339175405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77qst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f7b449-605b-4a51-8203-267ed337aa7d,},Annotations:map[string]string{io.kubernetes.container.hash: c7e6959a,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93aff8594addacbcd17ef05b5b1c3b91f61a707c182ab97cf8213f99d0cadf,PodSandboxId:5599a223aa67789a349a6c08aae4d1693239f6f7fbf960ce0b321d055136be2c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701139470563558826,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-x8dgt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 239d7f4a-b099-4dd4-9010-b78d4265aa47,},An
notations:map[string]string{io.kubernetes.container.hash: 3033062f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c9e61cecc600af1cd5d29c9bea946765510c8bbd07e4eec03f081a27f1f7d0,PodSandboxId:3c9e44f87bfb20a90cc5a3f9b50737e2223be7e755ebd0d1ce5820ce32032c04,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701139443940964506,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 3fc8c277-fb76-4adf-9332-9e20e1d69cb5,},Annotations:map[string]string{io.kubernetes.container.hash: f1b28216,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fbf7aea4957569ca413c5c22a8246ebc0af5b1360e560a579e792a86068af2,PodSandboxId:407652b6365b37e053b0fa5b202945a928e75e88ca89a9aef3a252a648e9c7dc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701139420773474029,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ggn66,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f10766ce-25f5-4bad-92ed-28a61d85aa41,},Annotations:map[string]string{io.kubernetes.container.hash: b0b13085,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2434d9b3a9279b9dbd10fe6b6dd082c18690c0b7df1f23a8ed1339020bbfb28d,PodSandboxId:1e477088928c4bf64ac9f0f431f43080d60d02981aaab394339d1c0d5ab1b257,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17011393
98495894793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-txstk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc6373e9-92fc-4968-b128-d9c87ce11d40,},Annotations:map[string]string{io.kubernetes.container.hash: 1f696751,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a72c7806f600c6f429f8599e281c38b29f4fe7bcb3baa968162083af637191,PodSandboxId:788dc251f8d3e8b4ebcbab7cfbc157515907aa81a0d91aeaff9b8075c73360cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701139397883017340,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xc959,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701a9fc9-356f-465c-b1d2-c4379fa76eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 50456202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b180fc5a27a7c2270124009e3117c9353289f1c3e66096d3b3d8ab0cdb2451,PodSandboxId:bfb5741dc890c5fb93699b3a98eef36492af72affabbb220264062657bcafe1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701139354234032055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0136fc3b-07ad-4cf9-a760-2f286eda9129,},Annotations:map[string]string{io.kubernetes.container.hash: fedc0bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa14d5134a83beb010a6e9503361e44d7b1690945ecbda5075b8d3f0aa96e8,PodSandboxId:77a78ef8a4d61062632f2a5bcebe290a77510487d13719ba27da671971ba3b0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701139347345514621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bhzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d22068c-8fbb-4267-892b-c66fa3fc1173,},Annotations:map[string]string{io.kubernetes.container.hash: ec0a3d64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be882fe1c700b4428ce34b4762b78d0d6ae946100aebb346fb9fa1dbf1c362c,PodSandboxId:b4aaf6b8c871fe9482a2cf68ae175fee61441557eb093c533c53e1120e34b65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
701139335857166108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a68b4c3-caed-4986-a621-3995d6eaa52f,},Annotations:map[string]string{io.kubernetes.container.hash: 2aa7884b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62689d201ceb6c299be3d209a07e2a8cbf581572089229fa064921f872ae063a,PodSandboxId:fe787ef5496dfb7c66bc2f3edd3b3d65c3423112fb4f6bac44cc08ddad2589b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f806
33b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701139311770703192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76a7b199b2fb1915478d0b46e10b7bee,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3f251fe079220440f65115616a825aca6d0a1205e4982bee63237204c6822ea,PodSandboxId:2dbd29f5075ebeb5e418a2ca2bb8c7d07fb42c7ed3653368ef13f60707137e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2
e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701139311821118825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4635e827a28420a8c9c6a2c804a49cf,},Annotations:map[string]string{io.kubernetes.container.hash: 8e250250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac13a485c6f3a440f1c4286a6f6f7f3a2ef8e0fc1338a9bd5ecf7cbd998d7277,PodSandboxId:bbd78f4a4f8202d1830db566eccde21ef2ad20f529fc5c733efad041706b0494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701139311639574226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7593871174a7e990f18012204732231c,},Annotations:map[string]string{io.kubernetes.container.hash: 690ba766,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74db4e1fb34a69017498f21d33c28cfa8cefd4dbbdc55730116852ebbe197455,PodSandboxId:486f52cd349999efe801b09a1cf42cb1acbf026e111cb5ea7dc98790ed1cdc25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701139311511191022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf76b94d5b9a6bf29faf7a30af8b90d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=009a2491-ab56-475c-b0d0-c270d84debcd name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.106626645Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=132276f1-a59b-4c95-a896-b5abc390a7d7 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.106688168Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=132276f1-a59b-4c95-a896-b5abc390a7d7 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.108666155Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=906a2f2f-9c63-4e23-b820-88f294877309 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.109810121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701139594109790751,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:529478,},InodesUsed:&UInt64Value{Value:221,},},},}" file="go-grpc-middleware/chain.go:25" id=906a2f2f-9c63-4e23-b820-88f294877309 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.110642496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e7d7438-1a87-413e-8ce7-89e9da15a6ef name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.110692510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e7d7438-1a87-413e-8ce7-89e9da15a6ef name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:46:34 addons-681229 crio[720]: time="2023-11-28 02:46:34.110967195Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:21564909f5f24a0e0a6753a05b80a5f6a4fd9849fcb075a850cab8d34b02da46,PodSandboxId:bddf650de4949bc80bf16479e58445f28ca7b1dc1db3b2e0edbcf7f800d299b7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701139585339175405,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-77qst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 33f7b449-605b-4a51-8203-267ed337aa7d,},Annotations:map[string]string{io.kubernetes.container.hash: c7e6959a,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc93aff8594addacbcd17ef05b5b1c3b91f61a707c182ab97cf8213f99d0cadf,PodSandboxId:5599a223aa67789a349a6c08aae4d1693239f6f7fbf960ce0b321d055136be2c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1,State:CONTAINER_RUNNING,CreatedAt:1701139470563558826,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-777fd4b855-x8dgt,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: 239d7f4a-b099-4dd4-9010-b78d4265aa47,},An
notations:map[string]string{io.kubernetes.container.hash: 3033062f,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8c9e61cecc600af1cd5d29c9bea946765510c8bbd07e4eec03f081a27f1f7d0,PodSandboxId:3c9e44f87bfb20a90cc5a3f9b50737e2223be7e755ebd0d1ce5820ce32032c04,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701139443940964506,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 3fc8c277-fb76-4adf-9332-9e20e1d69cb5,},Annotations:map[string]string{io.kubernetes.container.hash: f1b28216,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7fbf7aea4957569ca413c5c22a8246ebc0af5b1360e560a579e792a86068af2,PodSandboxId:407652b6365b37e053b0fa5b202945a928e75e88ca89a9aef3a252a648e9c7dc,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1701139420773474029,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ggn66,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: f10766ce-25f5-4bad-92ed-28a61d85aa41,},Annotations:map[string]string{io.kubernetes.container.hash: b0b13085,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2434d9b3a9279b9dbd10fe6b6dd082c18690c0b7df1f23a8ed1339020bbfb28d,PodSandboxId:1e477088928c4bf64ac9f0f431f43080d60d02981aaab394339d1c0d5ab1b257,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:17011393
98495894793,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-txstk,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cc6373e9-92fc-4968-b128-d9c87ce11d40,},Annotations:map[string]string{io.kubernetes.container.hash: 1f696751,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:22a72c7806f600c6f429f8599e281c38b29f4fe7bcb3baa968162083af637191,PodSandboxId:788dc251f8d3e8b4ebcbab7cfbc157515907aa81a0d91aeaff9b8075c73360cd,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1701139397883017340,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-xc959,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 701a9fc9-356f-465c-b1d2-c4379fa76eaf,},Annotations:map[string]string{io.kubernetes.container.hash: 50456202,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:82b180fc5a27a7c2270124009e3117c9353289f1c3e66096d3b3d8ab0cdb2451,PodSandboxId:bfb5741dc890c5fb93699b3a98eef36492af72affabbb220264062657bcafe1e,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c44
1c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701139354234032055,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0136fc3b-07ad-4cf9-a760-2f286eda9129,},Annotations:map[string]string{io.kubernetes.container.hash: fedc0bd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d9fa14d5134a83beb010a6e9503361e44d7b1690945ecbda5075b8d3f0aa96e8,PodSandboxId:77a78ef8a4d61062632f2a5bcebe290a77510487d13719ba27da671971ba3b0b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,St
ate:CONTAINER_RUNNING,CreatedAt:1701139347345514621,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8bhzv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d22068c-8fbb-4267-892b-c66fa3fc1173,},Annotations:map[string]string{io.kubernetes.container.hash: ec0a3d64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1be882fe1c700b4428ce34b4762b78d0d6ae946100aebb346fb9fa1dbf1c362c,PodSandboxId:b4aaf6b8c871fe9482a2cf68ae175fee61441557eb093c533c53e1120e34b65e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1
701139335857166108,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdbbh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a68b4c3-caed-4986-a621-3995d6eaa52f,},Annotations:map[string]string{io.kubernetes.container.hash: 2aa7884b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:62689d201ceb6c299be3d209a07e2a8cbf581572089229fa064921f872ae063a,PodSandboxId:fe787ef5496dfb7c66bc2f3edd3b3d65c3423112fb4f6bac44cc08ddad2589b7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f806
33b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701139311770703192,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76a7b199b2fb1915478d0b46e10b7bee,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c3f251fe079220440f65115616a825aca6d0a1205e4982bee63237204c6822ea,PodSandboxId:2dbd29f5075ebeb5e418a2ca2bb8c7d07fb42c7ed3653368ef13f60707137e22,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2
e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701139311821118825,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4635e827a28420a8c9c6a2c804a49cf,},Annotations:map[string]string{io.kubernetes.container.hash: 8e250250,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac13a485c6f3a440f1c4286a6f6f7f3a2ef8e0fc1338a9bd5ecf7cbd998d7277,PodSandboxId:bbd78f4a4f8202d1830db566eccde21ef2ad20f529fc5c733efad041706b0494,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations
:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701139311639574226,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7593871174a7e990f18012204732231c,},Annotations:map[string]string{io.kubernetes.container.hash: 690ba766,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74db4e1fb34a69017498f21d33c28cfa8cefd4dbbdc55730116852ebbe197455,PodSandboxId:486f52cd349999efe801b09a1cf42cb1acbf026e111cb5ea7dc98790ed1cdc25,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]str
ing{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701139311511191022,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-681229,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: adf76b94d5b9a6bf29faf7a30af8b90d,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e7d7438-1a87-413e-8ce7-89e9da15a6ef name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	21564909f5f24       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   bddf650de4949       hello-world-app-5d77478584-77qst
	fc93aff8594ad       ghcr.io/headlamp-k8s/headlamp@sha256:6153bcbd375a0157858961b1138ed62321a2639b37826b37498bce16ee736cc1                        2 minutes ago       Running             headlamp                  0                   5599a223aa677       headlamp-777fd4b855-x8dgt
	b8c9e61cecc60       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                              2 minutes ago       Running             nginx                     0                   3c9e44f87bfb2       nginx
	e7fbf7aea4957       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 2 minutes ago       Running             gcp-auth                  0                   407652b6365b3       gcp-auth-d4c87556c-ggn66
	2434d9b3a9279       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             3 minutes ago       Exited              patch                     1                   1e477088928c4       ingress-nginx-admission-patch-txstk
	22a72c7806f60       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   3 minutes ago       Exited              create                    0                   788dc251f8d3e       ingress-nginx-admission-create-xc959
	82b180fc5a27a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago       Running             storage-provisioner       0                   bfb5741dc890c       storage-provisioner
	d9fa14d5134a8       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   77a78ef8a4d61       kube-proxy-8bhzv
	1be882fe1c700       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   b4aaf6b8c871f       coredns-5dd5756b68-cdbbh
	c3f251fe07922       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   2dbd29f5075eb       etcd-addons-681229
	62689d201ceb6       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   fe787ef5496df       kube-controller-manager-addons-681229
	ac13a485c6f3a       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   bbd78f4a4f820       kube-apiserver-addons-681229
	74db4e1fb34a6       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   486f52cd34999       kube-scheduler-addons-681229
	
	* 
	* ==> coredns [1be882fe1c700b4428ce34b4762b78d0d6ae946100aebb346fb9fa1dbf1c362c] <==
	* [INFO] 10.244.0.7:54408 - 17735 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000207653s
	[INFO] 10.244.0.7:40199 - 48232 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000252259s
	[INFO] 10.244.0.7:40199 - 32615 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000126607s
	[INFO] 10.244.0.7:55534 - 16968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000252532s
	[INFO] 10.244.0.7:55534 - 27210 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068864s
	[INFO] 10.244.0.7:51848 - 4895 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000425276s
	[INFO] 10.244.0.7:51848 - 61726 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000315493s
	[INFO] 10.244.0.7:46567 - 59750 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000272006s
	[INFO] 10.244.0.7:46567 - 49763 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000078339s
	[INFO] 10.244.0.7:60963 - 63575 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000150532s
	[INFO] 10.244.0.7:60963 - 47530 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000153798s
	[INFO] 10.244.0.7:47592 - 10580 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147335s
	[INFO] 10.244.0.7:47592 - 1366 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000113291s
	[INFO] 10.244.0.7:60841 - 6983 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000147334s
	[INFO] 10.244.0.7:60841 - 36936 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000091385s
	[INFO] 10.244.0.21:60133 - 39657 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000370512s
	[INFO] 10.244.0.21:58277 - 27683 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00010457s
	[INFO] 10.244.0.21:53768 - 36428 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000087235s
	[INFO] 10.244.0.21:55373 - 6057 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000131625s
	[INFO] 10.244.0.21:42187 - 4086 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000102279s
	[INFO] 10.244.0.21:50825 - 32673 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000094098s
	[INFO] 10.244.0.21:35054 - 53491 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000869697s
	[INFO] 10.244.0.21:33648 - 26615 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000432432s
	[INFO] 10.244.0.24:58736 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000323496s
	[INFO] 10.244.0.24:44117 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000175308s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-681229
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-681229
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=addons-681229
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T02_41_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-681229
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 02:41:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-681229
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 02:46:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 02:45:13 +0000   Tue, 28 Nov 2023 02:41:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 02:45:13 +0000   Tue, 28 Nov 2023 02:41:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 02:45:13 +0000   Tue, 28 Nov 2023 02:41:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 02:45:13 +0000   Tue, 28 Nov 2023 02:41:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.100
	  Hostname:    addons-681229
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 ffb42f81ae0f4824a4a1329c1fba6b58
	  System UUID:                ffb42f81-ae0f-4824-a4a1-329c1fba6b58
	  Boot ID:                    2cce1bb6-340b-4aec-8b60-028378ac4dee
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-77qst         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gcp-auth                    gcp-auth-d4c87556c-ggn66                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m10s
	  headlamp                    headlamp-777fd4b855-x8dgt                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 coredns-5dd5756b68-cdbbh                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m23s
	  kube-system                 etcd-addons-681229                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m35s
	  kube-system                 kube-apiserver-addons-681229             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-controller-manager-addons-681229    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 kube-proxy-8bhzv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-scheduler-addons-681229             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m1s   kube-proxy       
	  Normal  Starting                 4m36s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m36s  kubelet          Node addons-681229 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m36s  kubelet          Node addons-681229 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m36s  kubelet          Node addons-681229 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m35s  kubelet          Node addons-681229 status is now: NodeReady
	  Normal  RegisteredNode           4m24s  node-controller  Node addons-681229 event: Registered Node addons-681229 in Controller
	
	* 
	* ==> dmesg <==
	* [  +4.469085] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.564906] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153834] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.044412] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.759647] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.106557] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.156422] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.113929] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.229100] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +9.398211] systemd-fstab-generator[915]: Ignoring "noauto" for root device
	[  +8.758232] systemd-fstab-generator[1254]: Ignoring "noauto" for root device
	[Nov28 02:42] kauditd_printk_skb: 4 callbacks suppressed
	[  +6.161086] kauditd_printk_skb: 64 callbacks suppressed
	[ +25.310895] kauditd_printk_skb: 14 callbacks suppressed
	[  +9.758684] kauditd_printk_skb: 20 callbacks suppressed
	[Nov28 02:43] kauditd_printk_skb: 7 callbacks suppressed
	[ +18.155396] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.251418] kauditd_printk_skb: 28 callbacks suppressed
	[Nov28 02:44] kauditd_printk_skb: 1 callbacks suppressed
	[ +25.325420] kauditd_printk_skb: 11 callbacks suppressed
	[Nov28 02:45] kauditd_printk_skb: 12 callbacks suppressed
	[Nov28 02:46] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [c3f251fe079220440f65115616a825aca6d0a1205e4982bee63237204c6822ea] <==
	* {"level":"warn","ts":"2023-11-28T02:43:10.790897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.866659ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-28T02:43:10.790964Z","caller":"traceutil/trace.go:171","msg":"trace[1231121602] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:957; }","duration":"105.944055ms","start":"2023-11-28T02:43:10.685009Z","end":"2023-11-28T02:43:10.790953Z","steps":["trace[1231121602] 'range keys from in-memory index tree'  (duration: 104.670583ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:43:10.791278Z","caller":"traceutil/trace.go:171","msg":"trace[29047359] transaction","detail":"{read_only:false; response_revision:958; number_of_response:1; }","duration":"100.070276ms","start":"2023-11-28T02:43:10.691198Z","end":"2023-11-28T02:43:10.791268Z","steps":["trace[29047359] 'process raft request'  (duration: 95.098308ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T02:43:10.791726Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"195.96183ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82193"}
	{"level":"info","ts":"2023-11-28T02:43:10.79175Z","caller":"traceutil/trace.go:171","msg":"trace[1660167166] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:957; }","duration":"195.993581ms","start":"2023-11-28T02:43:10.59575Z","end":"2023-11-28T02:43:10.791744Z","steps":["trace[1660167166] 'range keys from in-memory index tree'  (duration: 193.949601ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:43:17.539183Z","caller":"traceutil/trace.go:171","msg":"trace[1198946038] transaction","detail":"{read_only:false; response_revision:981; number_of_response:1; }","duration":"163.282201ms","start":"2023-11-28T02:43:17.375831Z","end":"2023-11-28T02:43:17.539113Z","steps":["trace[1198946038] 'process raft request'  (duration: 163.18814ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:43:21.581317Z","caller":"traceutil/trace.go:171","msg":"trace[1713835909] transaction","detail":"{read_only:false; response_revision:1017; number_of_response:1; }","duration":"246.052918ms","start":"2023-11-28T02:43:21.33525Z","end":"2023-11-28T02:43:21.581303Z","steps":["trace[1713835909] 'process raft request'  (duration: 244.741603ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:44:09.120606Z","caller":"traceutil/trace.go:171","msg":"trace[1268054196] transaction","detail":"{read_only:false; response_revision:1373; number_of_response:1; }","duration":"111.838995ms","start":"2023-11-28T02:44:09.008688Z","end":"2023-11-28T02:44:09.120527Z","steps":["trace[1268054196] 'process raft request'  (duration: 111.724601ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:44:27.713014Z","caller":"traceutil/trace.go:171","msg":"trace[54596392] transaction","detail":"{read_only:false; response_revision:1474; number_of_response:1; }","duration":"104.437542ms","start":"2023-11-28T02:44:27.608562Z","end":"2023-11-28T02:44:27.712999Z","steps":["trace[54596392] 'process raft request'  (duration: 104.037925ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:44:30.445974Z","caller":"traceutil/trace.go:171","msg":"trace[367960254] transaction","detail":"{read_only:false; response_revision:1480; number_of_response:1; }","duration":"158.156852ms","start":"2023-11-28T02:44:30.287801Z","end":"2023-11-28T02:44:30.445958Z","steps":["trace[367960254] 'process raft request'  (duration: 158.05366ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T02:44:41.862757Z","caller":"traceutil/trace.go:171","msg":"trace[1000961282] linearizableReadLoop","detail":"{readStateIndex:1581; appliedIndex:1580; }","duration":"342.244079ms","start":"2023-11-28T02:44:41.520499Z","end":"2023-11-28T02:44:41.862743Z","steps":["trace[1000961282] 'read index received'  (duration: 342.074828ms)","trace[1000961282] 'applied index is now lower than readState.Index'  (duration: 168.901µs)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T02:44:41.863053Z","caller":"traceutil/trace.go:171","msg":"trace[1962246131] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"381.810401ms","start":"2023-11-28T02:44:41.481232Z","end":"2023-11-28T02:44:41.863043Z","steps":["trace[1962246131] 'process raft request'  (duration: 381.389094ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T02:44:41.863283Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T02:44:41.48122Z","time spent":"381.887582ms","remote":"127.0.0.1:39282","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1511 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2023-11-28T02:44:41.863548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.06334ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-11-28T02:44:41.863602Z","caller":"traceutil/trace.go:171","msg":"trace[1220904638] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1523; }","duration":"343.118648ms","start":"2023-11-28T02:44:41.520475Z","end":"2023-11-28T02:44:41.863594Z","steps":["trace[1220904638] 'agreement among raft nodes before linearized reading'  (duration: 343.004618ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T02:44:41.863629Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T02:44:41.520464Z","time spent":"343.15935ms","remote":"127.0.0.1:39258","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
	{"level":"warn","ts":"2023-11-28T02:44:41.863817Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"126.634058ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:5638"}
	{"level":"info","ts":"2023-11-28T02:44:41.863864Z","caller":"traceutil/trace.go:171","msg":"trace[1759448826] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1523; }","duration":"126.680225ms","start":"2023-11-28T02:44:41.737176Z","end":"2023-11-28T02:44:41.863856Z","steps":["trace[1759448826] 'agreement among raft nodes before linearized reading'  (duration: 126.603533ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T02:45:12.250042Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.18792ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2176518536614347291 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1614 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T02:45:12.250253Z","caller":"traceutil/trace.go:171","msg":"trace[2106307687] linearizableReadLoop","detail":"{readStateIndex:1695; appliedIndex:1694; }","duration":"213.030952ms","start":"2023-11-28T02:45:12.037213Z","end":"2023-11-28T02:45:12.250244Z","steps":["trace[2106307687] 'read index received'  (duration: 19.114282ms)","trace[2106307687] 'applied index is now lower than readState.Index'  (duration: 193.915797ms)"],"step_count":2}
	{"level":"info","ts":"2023-11-28T02:45:12.25062Z","caller":"traceutil/trace.go:171","msg":"trace[432893589] transaction","detail":"{read_only:false; response_revision:1628; number_of_response:1; }","duration":"218.944404ms","start":"2023-11-28T02:45:12.03166Z","end":"2023-11-28T02:45:12.250604Z","steps":["trace[432893589] 'process raft request'  (duration: 24.819696ms)","trace[432893589] 'compare'  (duration: 192.964223ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T02:45:12.250796Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.595431ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/specs/\" range_end:\"/registry/services/specs0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-11-28T02:45:12.250856Z","caller":"traceutil/trace.go:171","msg":"trace[554304546] range","detail":"{range_begin:/registry/services/specs/; range_end:/registry/services/specs0; response_count:0; response_revision:1628; }","duration":"213.660597ms","start":"2023-11-28T02:45:12.037187Z","end":"2023-11-28T02:45:12.250847Z","steps":["trace[554304546] 'agreement among raft nodes before linearized reading'  (duration: 213.573421ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T02:45:12.251035Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.793533ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo\" ","response":"range_response_count:1 size:1698"}
	{"level":"info","ts":"2023-11-28T02:45:12.25108Z","caller":"traceutil/trace.go:171","msg":"trace[1855453972] range","detail":"{range_begin:/registry/snapshot.storage.k8s.io/volumesnapshots/default/new-snapshot-demo; range_end:; response_count:1; response_revision:1628; }","duration":"146.841493ms","start":"2023-11-28T02:45:12.104232Z","end":"2023-11-28T02:45:12.251074Z","steps":["trace[1855453972] 'agreement among raft nodes before linearized reading'  (duration: 146.768396ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [e7fbf7aea4957569ca413c5c22a8246ebc0af5b1360e560a579e792a86068af2] <==
	* 2023/11/28 02:43:40 GCP Auth Webhook started!
	2023/11/28 02:43:44 Ready to marshal response ...
	2023/11/28 02:43:44 Ready to write response ...
	2023/11/28 02:43:44 Ready to marshal response ...
	2023/11/28 02:43:44 Ready to write response ...
	2023/11/28 02:43:53 Ready to marshal response ...
	2023/11/28 02:43:53 Ready to write response ...
	2023/11/28 02:43:55 Ready to marshal response ...
	2023/11/28 02:43:55 Ready to write response ...
	2023/11/28 02:43:59 Ready to marshal response ...
	2023/11/28 02:43:59 Ready to write response ...
	2023/11/28 02:44:05 Ready to marshal response ...
	2023/11/28 02:44:05 Ready to write response ...
	2023/11/28 02:44:23 Ready to marshal response ...
	2023/11/28 02:44:23 Ready to write response ...
	2023/11/28 02:44:23 Ready to marshal response ...
	2023/11/28 02:44:23 Ready to write response ...
	2023/11/28 02:44:24 Ready to marshal response ...
	2023/11/28 02:44:24 Ready to write response ...
	2023/11/28 02:44:35 Ready to marshal response ...
	2023/11/28 02:44:35 Ready to write response ...
	2023/11/28 02:45:02 Ready to marshal response ...
	2023/11/28 02:45:02 Ready to write response ...
	2023/11/28 02:46:23 Ready to marshal response ...
	2023/11/28 02:46:23 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  02:46:34 up 5 min,  0 users,  load average: 1.37, 2.00, 1.04
	Linux addons-681229 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [ac13a485c6f3a440f1c4286a6f6f7f3a2ef8e0fc1338a9bd5ecf7cbd998d7277] <==
	* I1128 02:43:59.747676       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.101.52.76"}
	I1128 02:44:01.056058       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E1128 02:44:12.423025       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I1128 02:44:23.948944       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.111.166.106"}
	I1128 02:44:48.870731       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1128 02:45:19.660328       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.660578       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.678643       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.678708       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.697081       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.698201       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.721945       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.722123       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.733461       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.734030       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.737956       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.738028       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.746183       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.746260       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1128 02:45:19.758952       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1128 02:45:19.759625       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1128 02:45:20.739175       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1128 02:45:20.761934       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1128 02:45:20.764032       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1128 02:46:23.520310       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.110.15.153"}
	
	* 
	* ==> kube-controller-manager [62689d201ceb6c299be3d209a07e2a8cbf581572089229fa064921f872ae063a] <==
	* I1128 02:45:41.355845       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I1128 02:45:41.355964       1 shared_informer.go:318] Caches are synced for garbage collector
	W1128 02:45:42.966605       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:45:42.966640       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 02:45:54.160574       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:45:54.160640       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 02:45:56.282826       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:45:56.282888       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 02:45:59.998223       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:45:59.998523       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W1128 02:46:09.085578       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:46:09.085716       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1128 02:46:23.287322       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I1128 02:46:23.321628       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-77qst"
	I1128 02:46:23.336008       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="49.807587ms"
	I1128 02:46:23.386470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.359361ms"
	I1128 02:46:23.386612       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="50.154µs"
	I1128 02:46:23.400009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="102.753µs"
	I1128 02:46:26.006970       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I1128 02:46:26.012682       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-7c6974c4d8" duration="5.322µs"
	I1128 02:46:26.015864       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	W1128 02:46:26.150689       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1128 02:46:26.150750       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I1128 02:46:26.236075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.511324ms"
	I1128 02:46:26.236290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="110.277µs"
	
	* 
	* ==> kube-proxy [d9fa14d5134a83beb010a6e9503361e44d7b1690945ecbda5075b8d3f0aa96e8] <==
	* I1128 02:42:31.247454       1 server_others.go:69] "Using iptables proxy"
	I1128 02:42:31.371740       1 node.go:141] Successfully retrieved node IP: 192.168.39.100
	I1128 02:42:32.377971       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 02:42:32.378091       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 02:42:32.577722       1 server_others.go:152] "Using iptables Proxier"
	I1128 02:42:32.578087       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 02:42:32.578981       1 server.go:846] "Version info" version="v1.28.4"
	I1128 02:42:32.579978       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 02:42:32.584191       1 config.go:188] "Starting service config controller"
	I1128 02:42:32.584253       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 02:42:32.584347       1 config.go:97] "Starting endpoint slice config controller"
	I1128 02:42:32.586092       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 02:42:32.592180       1 config.go:315] "Starting node config controller"
	I1128 02:42:32.592266       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 02:42:32.685509       1 shared_informer.go:318] Caches are synced for service config
	I1128 02:42:32.686801       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 02:42:32.693869       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [74db4e1fb34a69017498f21d33c28cfa8cefd4dbbdc55730116852ebbe197455] <==
	* E1128 02:41:55.619959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 02:41:55.619988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 02:41:55.619995       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 02:41:55.620092       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 02:41:55.620100       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 02:41:55.620219       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 02:41:55.620228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1128 02:41:56.454006       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 02:41:56.454077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 02:41:56.480866       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 02:41:56.480921       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 02:41:56.676863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 02:41:56.676951       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 02:41:56.711521       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 02:41:56.711594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 02:41:56.794080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 02:41:56.794134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 02:41:56.886953       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 02:41:56.887004       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1128 02:41:56.887122       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 02:41:56.887160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 02:41:57.030474       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 02:41:57.030556       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1128 02:42:00.007207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 02:41:27 UTC, ends at Tue 2023-11-28 02:46:34 UTC. --
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.341694    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="0f0375dd-fa95-4dfd-9558-eff74a5baf8e" containerName="volume-snapshot-controller"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.341700    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="07d8ddb9-6816-41ad-8bfd-950b8ebd306f" containerName="liveness-probe"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.341706    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="07d8ddb9-6816-41ad-8bfd-950b8ebd306f" containerName="csi-external-health-monitor-controller"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.341711    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="deda611b-3f29-4504-b508-079ace8552cf" containerName="csi-attacher"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.341721    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="c692c4bc-8d11-47d6-bb53-859d60eeedcb" containerName="csi-resizer"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.491661    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/33f7b449-605b-4a51-8203-267ed337aa7d-gcp-creds\") pod \"hello-world-app-5d77478584-77qst\" (UID: \"33f7b449-605b-4a51-8203-267ed337aa7d\") " pod="default/hello-world-app-5d77478584-77qst"
	Nov 28 02:46:23 addons-681229 kubelet[1261]: I1128 02:46:23.491746    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kpg\" (UniqueName: \"kubernetes.io/projected/33f7b449-605b-4a51-8203-267ed337aa7d-kube-api-access-78kpg\") pod \"hello-world-app-5d77478584-77qst\" (UID: \"33f7b449-605b-4a51-8203-267ed337aa7d\") " pod="default/hello-world-app-5d77478584-77qst"
	Nov 28 02:46:24 addons-681229 kubelet[1261]: I1128 02:46:24.803625    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktf9q\" (UniqueName: \"kubernetes.io/projected/351ff202-1958-405a-83bd-c4adf73855d3-kube-api-access-ktf9q\") pod \"351ff202-1958-405a-83bd-c4adf73855d3\" (UID: \"351ff202-1958-405a-83bd-c4adf73855d3\") "
	Nov 28 02:46:24 addons-681229 kubelet[1261]: I1128 02:46:24.805971    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351ff202-1958-405a-83bd-c4adf73855d3-kube-api-access-ktf9q" (OuterVolumeSpecName: "kube-api-access-ktf9q") pod "351ff202-1958-405a-83bd-c4adf73855d3" (UID: "351ff202-1958-405a-83bd-c4adf73855d3"). InnerVolumeSpecName "kube-api-access-ktf9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 28 02:46:24 addons-681229 kubelet[1261]: I1128 02:46:24.904703    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-ktf9q\" (UniqueName: \"kubernetes.io/projected/351ff202-1958-405a-83bd-c4adf73855d3-kube-api-access-ktf9q\") on node \"addons-681229\" DevicePath \"\""
	Nov 28 02:46:25 addons-681229 kubelet[1261]: I1128 02:46:25.194994    1261 scope.go:117] "RemoveContainer" containerID="a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415"
	Nov 28 02:46:25 addons-681229 kubelet[1261]: I1128 02:46:25.267575    1261 scope.go:117] "RemoveContainer" containerID="a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415"
	Nov 28 02:46:25 addons-681229 kubelet[1261]: E1128 02:46:25.268198    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415\": container with ID starting with a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415 not found: ID does not exist" containerID="a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415"
	Nov 28 02:46:25 addons-681229 kubelet[1261]: I1128 02:46:25.268255    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415"} err="failed to get container status \"a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415\": rpc error: code = NotFound desc = could not find container \"a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415\": container with ID starting with a90fda659e44f947aed7d294484e1b90c3ee309450cd36a0dcbbfb0df6c87415 not found: ID does not exist"
	Nov 28 02:46:26 addons-681229 kubelet[1261]: I1128 02:46:26.750062    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="351ff202-1958-405a-83bd-c4adf73855d3" path="/var/lib/kubelet/pods/351ff202-1958-405a-83bd-c4adf73855d3/volumes"
	Nov 28 02:46:26 addons-681229 kubelet[1261]: I1128 02:46:26.750661    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="701a9fc9-356f-465c-b1d2-c4379fa76eaf" path="/var/lib/kubelet/pods/701a9fc9-356f-465c-b1d2-c4379fa76eaf/volumes"
	Nov 28 02:46:26 addons-681229 kubelet[1261]: I1128 02:46:26.751104    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="cc6373e9-92fc-4968-b128-d9c87ce11d40" path="/var/lib/kubelet/pods/cc6373e9-92fc-4968-b128-d9c87ce11d40/volumes"
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.445325    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81bb9aef-4bd1-472d-8d02-dfa70d602203-webhook-cert\") pod \"81bb9aef-4bd1-472d-8d02-dfa70d602203\" (UID: \"81bb9aef-4bd1-472d-8d02-dfa70d602203\") "
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.445473    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rw59j\" (UniqueName: \"kubernetes.io/projected/81bb9aef-4bd1-472d-8d02-dfa70d602203-kube-api-access-rw59j\") pod \"81bb9aef-4bd1-472d-8d02-dfa70d602203\" (UID: \"81bb9aef-4bd1-472d-8d02-dfa70d602203\") "
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.452903    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bb9aef-4bd1-472d-8d02-dfa70d602203-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "81bb9aef-4bd1-472d-8d02-dfa70d602203" (UID: "81bb9aef-4bd1-472d-8d02-dfa70d602203"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.453239    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bb9aef-4bd1-472d-8d02-dfa70d602203-kube-api-access-rw59j" (OuterVolumeSpecName: "kube-api-access-rw59j") pod "81bb9aef-4bd1-472d-8d02-dfa70d602203" (UID: "81bb9aef-4bd1-472d-8d02-dfa70d602203"). InnerVolumeSpecName "kube-api-access-rw59j". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.546628    1261 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/81bb9aef-4bd1-472d-8d02-dfa70d602203-webhook-cert\") on node \"addons-681229\" DevicePath \"\""
	Nov 28 02:46:29 addons-681229 kubelet[1261]: I1128 02:46:29.546662    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rw59j\" (UniqueName: \"kubernetes.io/projected/81bb9aef-4bd1-472d-8d02-dfa70d602203-kube-api-access-rw59j\") on node \"addons-681229\" DevicePath \"\""
	Nov 28 02:46:30 addons-681229 kubelet[1261]: I1128 02:46:30.235295    1261 scope.go:117] "RemoveContainer" containerID="2c084cfe249f0775b5ad5fdad9cd89fd9b8fd4166dd08932e9c1037310f7f125"
	Nov 28 02:46:30 addons-681229 kubelet[1261]: I1128 02:46:30.749681    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="81bb9aef-4bd1-472d-8d02-dfa70d602203" path="/var/lib/kubelet/pods/81bb9aef-4bd1-472d-8d02-dfa70d602203/volumes"
	
	* 
	* ==> storage-provisioner [82b180fc5a27a7c2270124009e3117c9353289f1c3e66096d3b3d8ab0cdb2451] <==
	* I1128 02:42:35.616820       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 02:42:35.695694       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 02:42:35.695843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 02:42:35.768591       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 02:42:35.802046       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6e36907-6d8e-448c-907d-035ef0ddbc44", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-681229_76baa695-d30a-4601-b7b6-4e1b0206aa0d became leader
	I1128 02:42:35.804535       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-681229_76baa695-d30a-4601-b7b6-4e1b0206aa0d!
	I1128 02:42:35.920212       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-681229_76baa695-d30a-4601-b7b6-4e1b0206aa0d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-681229 -n addons-681229
helpers_test.go:261: (dbg) Run:  kubectl --context addons-681229 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.85s)
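For manual triage of this ingress timeout, the same cluster can be inspected with plain kubectl before the profile is torn down. This is an illustrative sketch only: it assumes the addons-681229 profile from this run is still reachable and reuses the controller label selector that appears in the harness output; it is not part of the test itself.

	kubectl --context addons-681229 -n ingress-nginx get pods,svc -o wide
	kubectl --context addons-681229 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=50
	kubectl --context addons-681229 get ingress,svc,pods -n default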

                                                
                                    
TestAddons/StoppedEnableDisable (155.24s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:171: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-681229
addons_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-681229: exit status 82 (2m1.214631312s)

                                                
                                                
-- stdout --
	* Stopping node "addons-681229"  ...
	* Stopping node "addons-681229"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:173: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-681229" : exit status 82
addons_test.go:175: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681229
addons_test.go:175: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-681229: exit status 11 (21.741440796s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:177: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-681229" : exit status 11
addons_test.go:179: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681229
addons_test.go:179: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-681229: exit status 11 (6.144530459s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:181: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-681229" : exit status 11
addons_test.go:184: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-681229
addons_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-681229: exit status 11 (6.143361827s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.100:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:186: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-681229" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.24s)
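The failure above is a VM stop timeout (GUEST_STOP_TIMEOUT, exit status 82); the follow-up addon enable/disable commands then fail because SSH to 192.168.39.100 reports "no route to host". A minimal sketch for reproducing and collecting more detail locally, assuming the same binary path and profile name as this run:

	out/minikube-linux-amd64 status -p addons-681229
	out/minikube-linux-amd64 stop -p addons-681229 --alsologtostderr
	out/minikube-linux-amd64 logs -p addons-681229 --file=logs.txt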

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (169.16s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:206: (dbg) Run:  kubectl --context ingress-addon-legacy-648725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E1128 02:56:27.519706  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
addons_test.go:206: (dbg) Done: kubectl --context ingress-addon-legacy-648725 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (14.14641974s)
addons_test.go:231: (dbg) Run:  kubectl --context ingress-addon-legacy-648725 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:244: (dbg) Run:  kubectl --context ingress-addon-legacy-648725 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [0a2a23d3-f913-4a78-bbb4-769b100dfb31] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [0a2a23d3-f913-4a78-bbb4-769b100dfb31] Running
addons_test.go:249: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 12.015907323s
addons_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E1128 02:58:34.222554  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.227856  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.238105  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.258442  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.298749  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.379100  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.539562  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:34.860180  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:35.501126  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:36.781732  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:39.343566  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:43.673571  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:58:44.464770  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:58:54.705216  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
addons_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-648725 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.941141201s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:277: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:285: (dbg) Run:  kubectl --context ingress-addon-legacy-648725 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 ip
addons_test.go:296: (dbg) Run:  nslookup hello-john.test 192.168.39.42
addons_test.go:305: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:305: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons disable ingress-dns --alsologtostderr -v=1: (2.28906784s)
addons_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons disable ingress --alsologtostderr -v=1
addons_test.go:310: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons disable ingress --alsologtostderr -v=1: (7.7066006s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-648725 -n ingress-addon-legacy-648725
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 logs -n 25
E1128 02:59:11.360958  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-648725 logs -n 25: (1.127561291s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                  |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-068418 ssh findmnt                                          | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC |                     |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-068418                                                   | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount1 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| mount          | -p functional-068418                                                   | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC |                     |
	|                | /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount3 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| ssh            | functional-068418 ssh findmnt                                          | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | -T /mount1                                                             |                             |         |         |                     |                     |
	| ssh            | functional-068418 ssh findmnt                                          | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | -T /mount2                                                             |                             |         |         |                     |                     |
	| ssh            | functional-068418 ssh findmnt                                          | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | -T /mount3                                                             |                             |         |         |                     |                     |
	| mount          | -p functional-068418                                                   | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC |                     |
	|                | --kill=true                                                            |                             |         |         |                     |                     |
	| update-context | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| update-context | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | update-context                                                         |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                 |                             |         |         |                     |                     |
	| image          | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | image ls --format short                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| ssh            | functional-068418 ssh pgrep                                            | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC |                     |
	|                | buildkitd                                                              |                             |         |         |                     |                     |
	| image          | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | image ls --format yaml                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | image ls --format json                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-068418 image build -t                                       | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | localhost/my-image:functional-068418                                   |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                       |                             |         |         |                     |                     |
	| image          | functional-068418                                                      | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	|                | image ls --format table                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	| image          | functional-068418 image ls                                             | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	| delete         | -p functional-068418                                                   | functional-068418           | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:54 UTC |
	| start          | -p ingress-addon-legacy-648725                                         | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:54 UTC | 28 Nov 23 02:56 UTC |
	|                | --kubernetes-version=v1.18.20                                          |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                      |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                     |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                               |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-648725                                            | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:56 UTC | 28 Nov 23 02:56 UTC |
	|                | addons enable ingress                                                  |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-648725                                            | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:56 UTC | 28 Nov 23 02:56 UTC |
	|                | addons enable ingress-dns                                              |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                 |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-648725                                            | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:56 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                          |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                           |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-648725 ip                                         | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:59 UTC | 28 Nov 23 02:59 UTC |
	| addons         | ingress-addon-legacy-648725                                            | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:59 UTC | 28 Nov 23 02:59 UTC |
	|                | addons disable ingress-dns                                             |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-648725                                            | ingress-addon-legacy-648725 | jenkins | v1.32.0 | 28 Nov 23 02:59 UTC | 28 Nov 23 02:59 UTC |
	|                | addons disable ingress                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                 |                             |         |         |                     |                     |
	|----------------|------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
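The "ssh curl" entry in the table above never recorded an end time: the request through the legacy ingress timed out. Below is a minimal Go sketch of replaying that probe by hand; the minikube binary name, the 30-second timeout, and the error handling are illustrative assumptions, while the profile name and the curl command are taken from the table rows above.

// replay_ingress_check.go - hedged sketch, not part of the test suite.
// Replays the curl-through-ssh probe recorded in the command table above.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// 30s is an arbitrary illustrative timeout; the real test waited much longer.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Assumes a minikube binary on PATH and the profile shown in the table.
	cmd := exec.CommandContext(ctx, "minikube", "-p", "ingress-addon-legacy-648725",
		"ssh", "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'")
	out, err := cmd.CombinedOutput()
	fmt.Printf("exit err: %v\noutput:\n%s\n", err, out)
}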
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 02:54:24
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 02:54:24.075094  349264 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:54:24.075255  349264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:24.075264  349264 out.go:309] Setting ErrFile to fd 2...
	I1128 02:54:24.075269  349264 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:24.075501  349264 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 02:54:24.076124  349264 out.go:303] Setting JSON to false
	I1128 02:54:24.077168  349264 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5814,"bootTime":1701134250,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:54:24.077234  349264 start.go:138] virtualization: kvm guest
	I1128 02:54:24.079531  349264 out.go:177] * [ingress-addon-legacy-648725] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:54:24.081124  349264 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 02:54:24.081102  349264 notify.go:220] Checking for updates...
	I1128 02:54:24.082592  349264 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:54:24.084163  349264 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:54:24.085817  349264 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:54:24.087415  349264 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 02:54:24.088791  349264 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 02:54:24.090421  349264 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 02:54:24.125322  349264 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 02:54:24.126732  349264 start.go:298] selected driver: kvm2
	I1128 02:54:24.126748  349264 start.go:902] validating driver "kvm2" against <nil>
	I1128 02:54:24.126760  349264 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 02:54:24.127463  349264 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:54:24.127542  349264 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 02:54:24.141792  349264 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 02:54:24.141888  349264 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 02:54:24.142192  349264 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 02:54:24.142308  349264 cni.go:84] Creating CNI manager for ""
	I1128 02:54:24.142361  349264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:54:24.142378  349264 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1128 02:54:24.142389  349264 start_flags.go:323] config:
	{Name:ingress-addon-legacy-648725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-648725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:54:24.142566  349264 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:54:24.144777  349264 out.go:177] * Starting control plane node ingress-addon-legacy-648725 in cluster ingress-addon-legacy-648725
	I1128 02:54:24.146293  349264 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 02:54:24.180809  349264 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1128 02:54:24.180834  349264 cache.go:56] Caching tarball of preloaded images
	I1128 02:54:24.181010  349264 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 02:54:24.182838  349264 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1128 02:54:24.184082  349264 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:54:24.218864  349264 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I1128 02:54:27.867451  349264 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:54:27.867555  349264 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:54:28.875274  349264 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
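The preload step above downloads the tarball with a ?checksum=md5:... query parameter and then verifies the saved file before using it. Below is a minimal, hedged Go sketch of that download-then-verify pattern; only the URL and the md5 value come from the log, while the destination filename and the error handling are illustrative.

// preload_fetch.go - illustrative sketch of a checksum-verified download,
// mirroring the preload download/verify lines in the log above.
package main

import (
	"crypto/md5"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
	wantMD5 := "0d02e096853189c5b37812b400898e14" // from the log's ?checksum=md5:... parameter
	dst := "preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"

	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Hash while writing so the tarball is only read once.
	h := md5.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		panic(err)
	}
	got := fmt.Sprintf("%x", h.Sum(nil))
	if got != wantMD5 {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, wantMD5))
	}
	fmt.Println("preload verified:", dst)
}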
	I1128 02:54:28.875650  349264 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/config.json ...
	I1128 02:54:28.875684  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/config.json: {Name:mkfb7b91e01486d2ff450cc22ef9709045333e39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:54:28.875882  349264 start.go:365] acquiring machines lock for ingress-addon-legacy-648725: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 02:54:28.875922  349264 start.go:369] acquired machines lock for "ingress-addon-legacy-648725" in 22.281µs
	I1128 02:54:28.875943  349264 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-648725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-648725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 02:54:28.876050  349264 start.go:125] createHost starting for "" (driver="kvm2")
	I1128 02:54:28.878391  349264 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I1128 02:54:28.878608  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:54:28.878662  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:54:28.892830  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33373
	I1128 02:54:28.893348  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:54:28.893950  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:54:28.893967  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:54:28.894357  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:54:28.894568  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetMachineName
	I1128 02:54:28.894716  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:28.894858  349264 start.go:159] libmachine.API.Create for "ingress-addon-legacy-648725" (driver="kvm2")
	I1128 02:54:28.894897  349264 client.go:168] LocalClient.Create starting
	I1128 02:54:28.894928  349264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem
	I1128 02:54:28.894964  349264 main.go:141] libmachine: Decoding PEM data...
	I1128 02:54:28.894980  349264 main.go:141] libmachine: Parsing certificate...
	I1128 02:54:28.895034  349264 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem
	I1128 02:54:28.895054  349264 main.go:141] libmachine: Decoding PEM data...
	I1128 02:54:28.895067  349264 main.go:141] libmachine: Parsing certificate...
	I1128 02:54:28.895083  349264 main.go:141] libmachine: Running pre-create checks...
	I1128 02:54:28.895094  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .PreCreateCheck
	I1128 02:54:28.895454  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetConfigRaw
	I1128 02:54:28.895931  349264 main.go:141] libmachine: Creating machine...
	I1128 02:54:28.895952  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Create
	I1128 02:54:28.896155  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Creating KVM machine...
	I1128 02:54:28.897466  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found existing default KVM network
	I1128 02:54:28.898179  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:28.898036  349302 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I1128 02:54:28.903302  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | trying to create private KVM network mk-ingress-addon-legacy-648725 192.168.39.0/24...
	I1128 02:54:28.972629  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting up store path in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725 ...
	I1128 02:54:28.972670  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Building disk image from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1128 02:54:28.972684  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | private KVM network mk-ingress-addon-legacy-648725 192.168.39.0/24 created
	I1128 02:54:28.972708  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:28.972637  349302 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:54:28.972824  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Downloading /home/jenkins/minikube-integration/17671-333305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1128 02:54:29.218880  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:29.218756  349302 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa...
	I1128 02:54:29.421120  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:29.420918  349302 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/ingress-addon-legacy-648725.rawdisk...
	I1128 02:54:29.421203  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Writing magic tar header
	I1128 02:54:29.421232  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725 (perms=drwx------)
	I1128 02:54:29.421259  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines (perms=drwxr-xr-x)
	I1128 02:54:29.421276  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube (perms=drwxr-xr-x)
	I1128 02:54:29.421328  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Writing SSH key tar header
	I1128 02:54:29.421363  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:29.421039  349302 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725 ...
	I1128 02:54:29.421380  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305 (perms=drwxrwxr-x)
	I1128 02:54:29.421394  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725
	I1128 02:54:29.421405  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines
	I1128 02:54:29.421415  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:54:29.421430  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305
	I1128 02:54:29.421446  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 02:54:29.421461  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 02:54:29.421475  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home/jenkins
	I1128 02:54:29.421489  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 02:54:29.421523  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Creating domain...
	I1128 02:54:29.421534  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Checking permissions on dir: /home
	I1128 02:54:29.421550  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Skipping /home - not owner
	I1128 02:54:29.422445  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) define libvirt domain using xml: 
	I1128 02:54:29.422471  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) <domain type='kvm'>
	I1128 02:54:29.422485  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <name>ingress-addon-legacy-648725</name>
	I1128 02:54:29.422499  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <memory unit='MiB'>4096</memory>
	I1128 02:54:29.422512  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <vcpu>2</vcpu>
	I1128 02:54:29.422523  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <features>
	I1128 02:54:29.422536  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <acpi/>
	I1128 02:54:29.422547  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <apic/>
	I1128 02:54:29.422566  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <pae/>
	I1128 02:54:29.422581  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     
	I1128 02:54:29.422601  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   </features>
	I1128 02:54:29.422612  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <cpu mode='host-passthrough'>
	I1128 02:54:29.422626  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   
	I1128 02:54:29.422637  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   </cpu>
	I1128 02:54:29.422649  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <os>
	I1128 02:54:29.422669  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <type>hvm</type>
	I1128 02:54:29.422683  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <boot dev='cdrom'/>
	I1128 02:54:29.422696  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <boot dev='hd'/>
	I1128 02:54:29.422709  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <bootmenu enable='no'/>
	I1128 02:54:29.422722  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   </os>
	I1128 02:54:29.422735  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   <devices>
	I1128 02:54:29.422752  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <disk type='file' device='cdrom'>
	I1128 02:54:29.422771  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/boot2docker.iso'/>
	I1128 02:54:29.422785  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <target dev='hdc' bus='scsi'/>
	I1128 02:54:29.422799  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <readonly/>
	I1128 02:54:29.422812  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </disk>
	I1128 02:54:29.422840  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <disk type='file' device='disk'>
	I1128 02:54:29.422871  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 02:54:29.422894  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/ingress-addon-legacy-648725.rawdisk'/>
	I1128 02:54:29.422912  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <target dev='hda' bus='virtio'/>
	I1128 02:54:29.422948  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </disk>
	I1128 02:54:29.422988  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <interface type='network'>
	I1128 02:54:29.423002  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <source network='mk-ingress-addon-legacy-648725'/>
	I1128 02:54:29.423013  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <model type='virtio'/>
	I1128 02:54:29.423025  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </interface>
	I1128 02:54:29.423040  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <interface type='network'>
	I1128 02:54:29.423056  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <source network='default'/>
	I1128 02:54:29.423069  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <model type='virtio'/>
	I1128 02:54:29.423083  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </interface>
	I1128 02:54:29.423092  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <serial type='pty'>
	I1128 02:54:29.423110  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <target port='0'/>
	I1128 02:54:29.423134  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </serial>
	I1128 02:54:29.423152  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <console type='pty'>
	I1128 02:54:29.423164  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <target type='serial' port='0'/>
	I1128 02:54:29.423175  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </console>
	I1128 02:54:29.423193  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     <rng model='virtio'>
	I1128 02:54:29.423208  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)       <backend model='random'>/dev/random</backend>
	I1128 02:54:29.423219  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     </rng>
	I1128 02:54:29.423229  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     
	I1128 02:54:29.423241  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)     
	I1128 02:54:29.423254  349264 main.go:141] libmachine: (ingress-addon-legacy-648725)   </devices>
	I1128 02:54:29.423271  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) </domain>
	I1128 02:54:29.423287  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) 
	I1128 02:54:29.427194  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:64:94:db in network default
	I1128 02:54:29.427731  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Ensuring networks are active...
	I1128 02:54:29.427748  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:29.428305  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Ensuring network default is active
	I1128 02:54:29.428591  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Ensuring network mk-ingress-addon-legacy-648725 is active
	I1128 02:54:29.429134  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Getting domain xml...
	I1128 02:54:29.429748  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Creating domain...
	I1128 02:54:30.670121  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Waiting to get IP...
	I1128 02:54:30.670894  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:30.671320  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:30.671353  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:30.671292  349302 retry.go:31] will retry after 271.660148ms: waiting for machine to come up
	I1128 02:54:30.945008  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:30.945474  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:30.945506  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:30.945423  349302 retry.go:31] will retry after 365.992555ms: waiting for machine to come up
	I1128 02:54:31.313063  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:31.313561  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:31.313586  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:31.313518  349302 retry.go:31] will retry after 444.319153ms: waiting for machine to come up
	I1128 02:54:31.759110  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:31.759551  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:31.759580  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:31.759500  349302 retry.go:31] will retry after 544.571275ms: waiting for machine to come up
	I1128 02:54:32.305269  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:32.305714  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:32.305745  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:32.305659  349302 retry.go:31] will retry after 491.349723ms: waiting for machine to come up
	I1128 02:54:32.798419  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:32.798874  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:32.798924  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:32.798845  349302 retry.go:31] will retry after 741.97901ms: waiting for machine to come up
	I1128 02:54:33.542694  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:33.543176  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:33.543206  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:33.543087  349302 retry.go:31] will retry after 1.063285359s: waiting for machine to come up
	I1128 02:54:34.607791  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:34.608358  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:34.608388  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:34.608289  349302 retry.go:31] will retry after 1.130526068s: waiting for machine to come up
	I1128 02:54:35.740675  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:35.741091  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:35.741125  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:35.741034  349302 retry.go:31] will retry after 1.522676121s: waiting for machine to come up
	I1128 02:54:37.265726  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:37.266132  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:37.266158  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:37.266073  349302 retry.go:31] will retry after 1.911615085s: waiting for machine to come up
	I1128 02:54:39.179405  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:39.179796  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:39.179828  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:39.179748  349302 retry.go:31] will retry after 1.872895999s: waiting for machine to come up
	I1128 02:54:41.053883  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:41.054393  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:41.054425  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:41.054304  349302 retry.go:31] will retry after 2.854704424s: waiting for machine to come up
	I1128 02:54:43.910256  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:43.910685  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:43.910710  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:43.910639  349302 retry.go:31] will retry after 3.019226654s: waiting for machine to come up
	I1128 02:54:46.933870  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:46.934318  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find current IP address of domain ingress-addon-legacy-648725 in network mk-ingress-addon-legacy-648725
	I1128 02:54:46.934354  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | I1128 02:54:46.934260  349302 retry.go:31] will retry after 4.332910343s: waiting for machine to come up
	I1128 02:54:51.269766  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.270206  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Found IP for machine: 192.168.39.42
	I1128 02:54:51.270230  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Reserving static IP address...
	I1128 02:54:51.270246  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has current primary IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.270611  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-648725", mac: "52:54:00:8c:99:b8", ip: "192.168.39.42"} in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.341536  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Reserved static IP address: 192.168.39.42
	I1128 02:54:51.341581  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Getting to WaitForSSH function...
	I1128 02:54:51.341594  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Waiting for SSH to be available...
	I1128 02:54:51.343809  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.344290  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.344324  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.344448  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Using SSH client type: external
	I1128 02:54:51.344479  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa (-rw-------)
	I1128 02:54:51.344516  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 02:54:51.344535  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | About to run SSH command:
	I1128 02:54:51.344550  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | exit 0
	I1128 02:54:51.436515  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | SSH cmd err, output: <nil>: 
	I1128 02:54:51.436705  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) KVM machine creation complete!
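The "Waiting to get IP" phase above polls the libvirt DHCP leases and, on each miss, retries after a progressively longer delay (the "will retry after ..." lines). The following is a generic, hedged Go sketch of that wait-with-backoff pattern; the starting delay, growth factor, and jitter are placeholders and not minikube's actual retry implementation.

// wait_backoff.go - generic sketch of the retry-with-growing-delay pattern
// visible in the "will retry after ..." log lines above. Not minikube's code.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check() until it succeeds or the deadline passes,
// sleeping a little longer (with jitter) after each failed attempt.
func waitFor(check func() error, deadline time.Duration) error {
	delay := 250 * time.Millisecond // illustrative starting delay
	start := time.Now()
	for {
		if err := check(); err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return errors.New("timed out waiting for condition")
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // grow the delay, roughly like the log's increasing waits
	}
}

func main() {
	attempts := 0
	err := waitFor(func() error {
		attempts++
		if attempts < 5 { // stand-in for "machine has no IP yet"
			return errors.New("no IP yet")
		}
		return nil
	}, 30*time.Second)
	fmt.Println("done after", attempts, "attempts, err:", err)
}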
	I1128 02:54:51.437109  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetConfigRaw
	I1128 02:54:51.437698  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:51.437902  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:51.438062  349264 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 02:54:51.438075  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetState
	I1128 02:54:51.439213  349264 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 02:54:51.439229  349264 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 02:54:51.439235  349264 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 02:54:51.439243  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:51.441557  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.441909  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.441944  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.442049  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:51.442205  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.442342  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.442493  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:51.442692  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:51.443025  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:51.443036  349264 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 02:54:51.563957  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 02:54:51.563996  349264 main.go:141] libmachine: Detecting the provisioner...
	I1128 02:54:51.564005  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:51.566649  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.567069  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.567103  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.567232  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:51.567437  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.567600  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.567711  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:51.567860  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:51.568371  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:51.568387  349264 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 02:54:51.689614  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 02:54:51.689753  349264 main.go:141] libmachine: found compatible host: buildroot
	I1128 02:54:51.689770  349264 main.go:141] libmachine: Provisioning with buildroot...
	I1128 02:54:51.689784  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetMachineName
	I1128 02:54:51.690132  349264 buildroot.go:166] provisioning hostname "ingress-addon-legacy-648725"
	I1128 02:54:51.690168  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetMachineName
	I1128 02:54:51.690330  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:51.692910  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.693249  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.693288  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.693443  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:51.693672  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.693836  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.693964  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:51.694140  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:51.694506  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:51.694523  349264 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-648725 && echo "ingress-addon-legacy-648725" | sudo tee /etc/hostname
	I1128 02:54:51.825822  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-648725
	
	I1128 02:54:51.825857  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:51.828732  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.829128  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.829164  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.829313  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:51.829545  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.829740  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:51.829972  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:51.830203  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:51.830530  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:51.830548  349264 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-648725' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-648725/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-648725' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 02:54:51.956382  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 02:54:51.956420  349264 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 02:54:51.956485  349264 buildroot.go:174] setting up certificates
	I1128 02:54:51.956532  349264 provision.go:83] configureAuth start
	I1128 02:54:51.956554  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetMachineName
	I1128 02:54:51.956834  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetIP
	I1128 02:54:51.959649  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.960008  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.960038  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.960200  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:51.962455  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.962761  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:51.962790  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:51.962896  349264 provision.go:138] copyHostCerts
	I1128 02:54:51.962934  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 02:54:51.962978  349264 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 02:54:51.963005  349264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 02:54:51.963093  349264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 02:54:51.963198  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 02:54:51.963232  349264 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 02:54:51.963244  349264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 02:54:51.963395  349264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 02:54:51.963499  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 02:54:51.963532  349264 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 02:54:51.963615  349264 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 02:54:51.963687  349264 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 02:54:51.963782  349264 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-648725 san=[192.168.39.42 192.168.39.42 localhost 127.0.0.1 minikube ingress-addon-legacy-648725]
	I1128 02:54:52.022201  349264 provision.go:172] copyRemoteCerts
	I1128 02:54:52.022283  349264 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 02:54:52.022314  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.024988  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.025317  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.025345  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.025505  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.025702  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.025845  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.025979  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:54:52.114279  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 02:54:52.114359  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1128 02:54:52.136035  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 02:54:52.136103  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 02:54:52.157228  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 02:54:52.157322  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 02:54:52.178628  349264 provision.go:86] duration metric: configureAuth took 222.076064ms
	I1128 02:54:52.178656  349264 buildroot.go:189] setting minikube options for container-runtime
	I1128 02:54:52.178862  349264 config.go:182] Loaded profile config "ingress-addon-legacy-648725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1128 02:54:52.178962  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.181641  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.181965  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.182017  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.182189  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.182424  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.182619  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.182758  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.183049  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:52.183410  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:52.183429  349264 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 02:54:52.504375  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 02:54:52.504407  349264 main.go:141] libmachine: Checking connection to Docker...
	I1128 02:54:52.504422  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetURL
	I1128 02:54:52.505680  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Using libvirt version 6000000
	I1128 02:54:52.507927  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.508283  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.508318  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.508444  349264 main.go:141] libmachine: Docker is up and running!
	I1128 02:54:52.508457  349264 main.go:141] libmachine: Reticulating splines...
	I1128 02:54:52.508464  349264 client.go:171] LocalClient.Create took 23.613556853s
	I1128 02:54:52.508486  349264 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-648725" took 23.613629076s
	I1128 02:54:52.508510  349264 start.go:300] post-start starting for "ingress-addon-legacy-648725" (driver="kvm2")
	I1128 02:54:52.508532  349264 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 02:54:52.508564  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:52.508832  349264 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 02:54:52.508871  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.511175  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.511541  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.511573  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.511695  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.511895  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.512055  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.512219  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:54:52.603286  349264 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 02:54:52.607481  349264 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 02:54:52.607502  349264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 02:54:52.607580  349264 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 02:54:52.607673  349264 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 02:54:52.607689  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 02:54:52.607809  349264 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 02:54:52.617046  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 02:54:52.639333  349264 start.go:303] post-start completed in 130.80094ms
	I1128 02:54:52.639387  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetConfigRaw
	I1128 02:54:52.640001  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetIP
	I1128 02:54:52.642735  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.643078  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.643115  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.643387  349264 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/config.json ...
	I1128 02:54:52.643578  349264 start.go:128] duration metric: createHost completed in 23.767514383s
	I1128 02:54:52.643624  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.645968  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.646298  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.646333  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.646464  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.646654  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.646789  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.646915  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.647087  349264 main.go:141] libmachine: Using SSH client type: native
	I1128 02:54:52.647528  349264 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.42 22 <nil> <nil>}
	I1128 02:54:52.647546  349264 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 02:54:52.765433  349264 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701140092.745945959
	
	I1128 02:54:52.765461  349264 fix.go:206] guest clock: 1701140092.745945959
	I1128 02:54:52.765476  349264 fix.go:219] Guest: 2023-11-28 02:54:52.745945959 +0000 UTC Remote: 2023-11-28 02:54:52.64359168 +0000 UTC m=+28.619646449 (delta=102.354279ms)
	I1128 02:54:52.765539  349264 fix.go:190] guest clock delta is within tolerance: 102.354279ms
	I1128 02:54:52.765547  349264 start.go:83] releasing machines lock for "ingress-addon-legacy-648725", held for 23.889613382s
	I1128 02:54:52.765575  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:52.765843  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetIP
	I1128 02:54:52.768499  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.768824  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.768855  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.769083  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:52.769568  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:52.769751  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:54:52.769858  349264 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 02:54:52.769907  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.769934  349264 ssh_runner.go:195] Run: cat /version.json
	I1128 02:54:52.769959  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:54:52.772377  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.772651  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.772687  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.772713  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.772816  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.773026  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.773080  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:52.773111  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:52.773176  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.773248  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:54:52.773417  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:54:52.773462  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:54:52.773574  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:54:52.773706  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:54:52.858219  349264 ssh_runner.go:195] Run: systemctl --version
	I1128 02:54:52.887940  349264 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 02:54:53.046436  349264 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 02:54:53.052536  349264 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 02:54:53.052614  349264 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 02:54:53.067141  349264 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 02:54:53.067169  349264 start.go:472] detecting cgroup driver to use...
	I1128 02:54:53.067236  349264 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 02:54:53.081256  349264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 02:54:53.092606  349264 docker.go:203] disabling cri-docker service (if available) ...
	I1128 02:54:53.092669  349264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 02:54:53.104217  349264 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 02:54:53.115572  349264 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 02:54:53.222080  349264 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 02:54:53.335925  349264 docker.go:219] disabling docker service ...
	I1128 02:54:53.336024  349264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 02:54:53.350652  349264 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 02:54:53.362116  349264 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 02:54:53.459782  349264 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 02:54:53.559172  349264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 02:54:53.571045  349264 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 02:54:53.587910  349264 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1128 02:54:53.587989  349264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:54:53.597138  349264 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 02:54:53.597201  349264 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:54:53.605959  349264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:54:53.614648  349264 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 02:54:53.623478  349264 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 02:54:53.632515  349264 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 02:54:53.639935  349264 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 02:54:53.639987  349264 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 02:54:53.651160  349264 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 02:54:53.660237  349264 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 02:54:53.762335  349264 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 02:54:53.924270  349264 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 02:54:53.924362  349264 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 02:54:53.929376  349264 start.go:540] Will wait 60s for crictl version
	I1128 02:54:53.929445  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:53.933187  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 02:54:53.972504  349264 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 02:54:53.972584  349264 ssh_runner.go:195] Run: crio --version
	I1128 02:54:54.024645  349264 ssh_runner.go:195] Run: crio --version
	I1128 02:54:54.068761  349264 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I1128 02:54:54.070130  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetIP
	I1128 02:54:54.072706  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:54.073087  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:54:54.073126  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:54:54.073354  349264 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 02:54:54.078094  349264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 02:54:54.090661  349264 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I1128 02:54:54.090725  349264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 02:54:54.127254  349264 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1128 02:54:54.127332  349264 ssh_runner.go:195] Run: which lz4
	I1128 02:54:54.131412  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1128 02:54:54.131514  349264 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 02:54:54.135546  349264 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 02:54:54.135579  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I1128 02:54:56.068915  349264 crio.go:444] Took 1.937421 seconds to copy over tarball
	I1128 02:54:56.068992  349264 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 02:54:59.048251  349264 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.979225015s)
	I1128 02:54:59.048279  349264 crio.go:451] Took 2.979333 seconds to extract the tarball
	I1128 02:54:59.048289  349264 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 02:54:59.091916  349264 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 02:54:59.214549  349264 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I1128 02:54:59.214585  349264 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1128 02:54:59.214651  349264 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 02:54:59.214677  349264 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 02:54:59.214700  349264 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1128 02:54:59.214716  349264 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I1128 02:54:59.214774  349264 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I1128 02:54:59.214855  349264 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 02:54:59.214922  349264 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 02:54:59.214946  349264 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 02:54:59.215976  349264 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I1128 02:54:59.215982  349264 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 02:54:59.215992  349264 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1128 02:54:59.215995  349264 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 02:54:59.216004  349264 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 02:54:59.215977  349264 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 02:54:59.215976  349264 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1128 02:54:59.216254  349264 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 02:54:59.404735  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I1128 02:54:59.404874  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I1128 02:54:59.413488  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I1128 02:54:59.417254  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1128 02:54:59.434682  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I1128 02:54:59.445105  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 02:54:59.450543  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 02:54:59.514476  349264 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I1128 02:54:59.546636  349264 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I1128 02:54:59.546694  349264 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I1128 02:54:59.546745  349264 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I1128 02:54:59.546800  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.546701  349264 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I1128 02:54:59.546875  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.553046  349264 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I1128 02:54:59.553090  349264 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I1128 02:54:59.553151  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.581319  349264 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1128 02:54:59.581398  349264 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1128 02:54:59.581448  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.618343  349264 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I1128 02:54:59.618399  349264 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I1128 02:54:59.618443  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.725489  349264 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I1128 02:54:59.725533  349264 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I1128 02:54:59.725545  349264 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 02:54:59.725551  349264 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I1128 02:54:59.725595  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.725595  349264 ssh_runner.go:195] Run: which crictl
	I1128 02:54:59.725807  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I1128 02:54:59.725827  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I1128 02:54:59.725837  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I1128 02:54:59.725913  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1128 02:54:59.725974  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I1128 02:54:59.847299  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1128 02:54:59.847362  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I1128 02:54:59.847410  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I1128 02:54:59.847420  349264 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I1128 02:54:59.847512  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I1128 02:54:59.847550  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1128 02:54:59.847554  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I1128 02:54:59.902848  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I1128 02:54:59.902858  349264 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I1128 02:54:59.902941  349264 cache_images.go:92] LoadImages completed in 688.341972ms
	W1128 02:54:59.903027  349264 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0: no such file or directory
	I1128 02:54:59.903111  349264 ssh_runner.go:195] Run: crio config
	I1128 02:54:59.966121  349264 cni.go:84] Creating CNI manager for ""
	I1128 02:54:59.966153  349264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:54:59.966176  349264 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 02:54:59.966198  349264 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.42 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-648725 NodeName:ingress-addon-legacy-648725 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.42"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.42 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1128 02:54:59.966351  349264 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.42
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-648725"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.42
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.42"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 02:54:59.966472  349264 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-648725 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.42
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-648725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 02:54:59.966556  349264 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1128 02:54:59.976136  349264 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 02:54:59.976231  349264 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 02:54:59.985351  349264 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I1128 02:55:00.001341  349264 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1128 02:55:00.017850  349264 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1128 02:55:00.034351  349264 ssh_runner.go:195] Run: grep 192.168.39.42	control-plane.minikube.internal$ /etc/hosts
	I1128 02:55:00.038056  349264 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.42	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 02:55:00.049586  349264 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725 for IP: 192.168.39.42
	I1128 02:55:00.049649  349264 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.049864  349264 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 02:55:00.049941  349264 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 02:55:00.050048  349264 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key
	I1128 02:55:00.050064  349264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt with IP's: []
	I1128 02:55:00.186844  349264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt ...
	I1128 02:55:00.186877  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: {Name:mk976c9fdeec8105392aca5c9b2c56a92de07431 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.187103  349264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key ...
	I1128 02:55:00.187121  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key: {Name:mka0828106a8321b4623c28d1ab266ab32f5b62c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.187232  349264 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key.95c56caa
	I1128 02:55:00.187258  349264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt.95c56caa with IP's: [192.168.39.42 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 02:55:00.318360  349264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt.95c56caa ...
	I1128 02:55:00.318391  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt.95c56caa: {Name:mke7aa42c718a7d3c2511518d0c714040ab0a911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.318574  349264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key.95c56caa ...
	I1128 02:55:00.318593  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key.95c56caa: {Name:mka53b8eb90d7ac1e7463f407a7618c8aac9db94 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.318691  349264 certs.go:337] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt.95c56caa -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt
	I1128 02:55:00.318844  349264 certs.go:341] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key.95c56caa -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key
	I1128 02:55:00.318933  349264 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.key
	I1128 02:55:00.318958  349264 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.crt with IP's: []
	I1128 02:55:00.611680  349264 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.crt ...
	I1128 02:55:00.611717  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.crt: {Name:mkbd36033298dbde64aa294079907fb2db538588 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.611922  349264 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.key ...
	I1128 02:55:00.611941  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.key: {Name:mkcdeee2d8bd98dc50597f5073e0ec99e52b8f34 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:00.612038  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 02:55:00.612064  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 02:55:00.612092  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 02:55:00.612113  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 02:55:00.612127  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 02:55:00.612150  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 02:55:00.612170  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 02:55:00.612188  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 02:55:00.612264  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 02:55:00.612318  349264 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 02:55:00.612334  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 02:55:00.612367  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 02:55:00.612401  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 02:55:00.612442  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 02:55:00.612503  349264 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 02:55:00.612543  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:55:00.612563  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 02:55:00.612581  349264 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 02:55:00.613265  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 02:55:00.636367  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 02:55:00.658675  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 02:55:00.680512  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 02:55:00.702788  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 02:55:00.725371  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 02:55:00.747324  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 02:55:00.768784  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 02:55:00.790480  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 02:55:00.811280  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 02:55:00.832405  349264 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 02:55:00.854340  349264 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 02:55:00.869832  349264 ssh_runner.go:195] Run: openssl version
	I1128 02:55:00.875316  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 02:55:00.885828  349264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:55:00.890483  349264 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:55:00.890562  349264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 02:55:00.896086  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 02:55:00.906540  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 02:55:00.917074  349264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 02:55:00.921747  349264 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 02:55:00.921810  349264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 02:55:00.927621  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 02:55:00.937481  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 02:55:00.947434  349264 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 02:55:00.951622  349264 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 02:55:00.951671  349264 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 02:55:00.956961  349264 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 02:55:00.966971  349264 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 02:55:00.971080  349264 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 02:55:00.971133  349264 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-648725 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-648725 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:55:00.971228  349264 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 02:55:00.971274  349264 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 02:55:01.010004  349264 cri.go:89] found id: ""
	I1128 02:55:01.010094  349264 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 02:55:01.019581  349264 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 02:55:01.028694  349264 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 02:55:01.037767  349264 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 02:55:01.037813  349264 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 02:55:01.092986  349264 kubeadm.go:322] W1128 02:55:01.084748     962 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1128 02:55:01.219708  349264 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 02:55:03.857966  349264 kubeadm.go:322] W1128 02:55:03.851527     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1128 02:55:03.861456  349264 kubeadm.go:322] W1128 02:55:03.854986     962 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1128 02:55:14.365846  349264 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I1128 02:55:14.365937  349264 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 02:55:14.366038  349264 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 02:55:14.366134  349264 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 02:55:14.366211  349264 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 02:55:14.366370  349264 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 02:55:14.366465  349264 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 02:55:14.366547  349264 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 02:55:14.366644  349264 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 02:55:14.368219  349264 out.go:204]   - Generating certificates and keys ...
	I1128 02:55:14.368310  349264 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 02:55:14.368379  349264 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 02:55:14.368481  349264 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 02:55:14.368569  349264 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 02:55:14.368662  349264 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 02:55:14.368746  349264 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 02:55:14.368804  349264 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 02:55:14.368961  349264 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-648725 localhost] and IPs [192.168.39.42 127.0.0.1 ::1]
	I1128 02:55:14.369033  349264 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 02:55:14.369192  349264 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-648725 localhost] and IPs [192.168.39.42 127.0.0.1 ::1]
	I1128 02:55:14.369246  349264 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 02:55:14.369301  349264 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 02:55:14.369376  349264 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 02:55:14.369456  349264 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 02:55:14.369543  349264 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 02:55:14.369629  349264 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 02:55:14.369729  349264 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 02:55:14.369811  349264 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 02:55:14.369902  349264 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 02:55:14.371366  349264 out.go:204]   - Booting up control plane ...
	I1128 02:55:14.371454  349264 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 02:55:14.371546  349264 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 02:55:14.371634  349264 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 02:55:14.371715  349264 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 02:55:14.371880  349264 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 02:55:14.371964  349264 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.004539 seconds
	I1128 02:55:14.372101  349264 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 02:55:14.372214  349264 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 02:55:14.372267  349264 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 02:55:14.372381  349264 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-648725 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 02:55:14.372426  349264 kubeadm.go:322] [bootstrap-token] Using token: ns8860.lfol45skt58f7wx5
	I1128 02:55:14.373861  349264 out.go:204]   - Configuring RBAC rules ...
	I1128 02:55:14.374002  349264 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 02:55:14.374109  349264 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 02:55:14.374264  349264 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 02:55:14.374408  349264 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 02:55:14.374523  349264 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 02:55:14.374594  349264 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 02:55:14.374729  349264 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 02:55:14.374783  349264 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 02:55:14.374860  349264 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 02:55:14.374870  349264 kubeadm.go:322] 
	I1128 02:55:14.374950  349264 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 02:55:14.374970  349264 kubeadm.go:322] 
	I1128 02:55:14.375054  349264 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 02:55:14.375069  349264 kubeadm.go:322] 
	I1128 02:55:14.375104  349264 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 02:55:14.375187  349264 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 02:55:14.375236  349264 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 02:55:14.375245  349264 kubeadm.go:322] 
	I1128 02:55:14.375289  349264 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 02:55:14.375366  349264 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 02:55:14.375423  349264 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 02:55:14.375429  349264 kubeadm.go:322] 
	I1128 02:55:14.375519  349264 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 02:55:14.375598  349264 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 02:55:14.375605  349264 kubeadm.go:322] 
	I1128 02:55:14.375673  349264 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token ns8860.lfol45skt58f7wx5 \
	I1128 02:55:14.375757  349264 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 02:55:14.375781  349264 kubeadm.go:322]     --control-plane 
	I1128 02:55:14.375786  349264 kubeadm.go:322] 
	I1128 02:55:14.375869  349264 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 02:55:14.375879  349264 kubeadm.go:322] 
	I1128 02:55:14.375985  349264 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token ns8860.lfol45skt58f7wx5 \
	I1128 02:55:14.376151  349264 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 02:55:14.376177  349264 cni.go:84] Creating CNI manager for ""
	I1128 02:55:14.376187  349264 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:55:14.377747  349264 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 02:55:14.379135  349264 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 02:55:14.389722  349264 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 02:55:14.409927  349264 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 02:55:14.410006  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:14.410032  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=ingress-addon-legacy-648725 minikube.k8s.io/updated_at=2023_11_28T02_55_14_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:14.603648  349264 ops.go:34] apiserver oom_adj: -16
	I1128 02:55:14.603813  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:14.814935  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:15.446510  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:15.946889  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:16.446371  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:16.946056  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:17.446233  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:17.946264  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:18.446261  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:18.946118  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:19.446678  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:19.946033  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:20.446924  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:20.946776  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:21.446481  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:21.946112  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:22.446745  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:22.946616  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:23.446211  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:23.946568  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:24.446813  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:24.946692  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:25.446677  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:25.946095  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:26.446629  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:26.946315  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:27.446058  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:27.946103  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:28.446617  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:28.946612  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:29.446693  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:29.946227  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:30.446937  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:30.946472  349264 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 02:55:31.084779  349264 kubeadm.go:1081] duration metric: took 16.674848488s to wait for elevateKubeSystemPrivileges.
	I1128 02:55:31.084815  349264 kubeadm.go:406] StartCluster complete in 30.113687435s
	I1128 02:55:31.084839  349264 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:31.084939  349264 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:55:31.085836  349264 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 02:55:31.086081  349264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 02:55:31.086200  349264 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 02:55:31.086307  349264 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-648725"
	I1128 02:55:31.086329  349264 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-648725"
	I1128 02:55:31.086340  349264 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-648725"
	I1128 02:55:31.086356  349264 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-648725"
	I1128 02:55:31.086359  349264 config.go:182] Loaded profile config "ingress-addon-legacy-648725": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I1128 02:55:31.086406  349264 host.go:66] Checking if "ingress-addon-legacy-648725" exists ...
	I1128 02:55:31.086802  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:55:31.086810  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:55:31.086866  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:55:31.086887  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:55:31.086896  349264 kapi.go:59] client config for ingress-addon-legacy-648725: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 02:55:31.087759  349264 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 02:55:31.102884  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44011
	I1128 02:55:31.102892  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45759
	I1128 02:55:31.103376  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:55:31.103381  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:55:31.103864  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:55:31.103880  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:55:31.104029  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:55:31.104052  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:55:31.104261  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:55:31.104399  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:55:31.104590  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetState
	I1128 02:55:31.104847  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:55:31.104900  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:55:31.107234  349264 kapi.go:59] client config for ingress-addon-legacy-648725: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 02:55:31.107586  349264 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-648725"
	I1128 02:55:31.107629  349264 host.go:66] Checking if "ingress-addon-legacy-648725" exists ...
	I1128 02:55:31.108048  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:55:31.108113  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:55:31.120792  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41315
	I1128 02:55:31.121251  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:55:31.121770  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:55:31.121796  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:55:31.122168  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:55:31.122350  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetState
	I1128 02:55:31.122649  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I1128 02:55:31.123145  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:55:31.123597  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:55:31.123634  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:55:31.123998  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:55:31.124263  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:55:31.126255  349264 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 02:55:31.124560  349264 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:55:31.126306  349264 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:55:31.127891  349264 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 02:55:31.127914  349264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 02:55:31.127936  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:55:31.131480  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:55:31.131934  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:55:31.131955  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:55:31.132118  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:55:31.132308  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:55:31.132482  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:55:31.132625  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:55:31.140813  349264 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45717
	I1128 02:55:31.141211  349264 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:55:31.141744  349264 main.go:141] libmachine: Using API Version  1
	I1128 02:55:31.141772  349264 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:55:31.142157  349264 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:55:31.142367  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetState
	I1128 02:55:31.143821  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .DriverName
	I1128 02:55:31.144102  349264 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 02:55:31.144122  349264 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 02:55:31.144141  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHHostname
	I1128 02:55:31.146673  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:55:31.147090  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8c:99:b8", ip: ""} in network mk-ingress-addon-legacy-648725: {Iface:virbr1 ExpiryTime:2023-11-28 03:54:44 +0000 UTC Type:0 Mac:52:54:00:8c:99:b8 Iaid: IPaddr:192.168.39.42 Prefix:24 Hostname:ingress-addon-legacy-648725 Clientid:01:52:54:00:8c:99:b8}
	I1128 02:55:31.147121  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | domain ingress-addon-legacy-648725 has defined IP address 192.168.39.42 and MAC address 52:54:00:8c:99:b8 in network mk-ingress-addon-legacy-648725
	I1128 02:55:31.147245  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHPort
	I1128 02:55:31.147404  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHKeyPath
	I1128 02:55:31.147567  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .GetSSHUsername
	I1128 02:55:31.147728  349264 sshutil.go:53] new ssh client: &{IP:192.168.39.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/ingress-addon-legacy-648725/id_rsa Username:docker}
	I1128 02:55:31.182866  349264 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-648725" context rescaled to 1 replicas
	I1128 02:55:31.182911  349264 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.42 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 02:55:31.184718  349264 out.go:177] * Verifying Kubernetes components...
	I1128 02:55:31.186134  349264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 02:55:31.368236  349264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 02:55:31.459419  349264 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 02:55:31.485942  349264 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 02:55:31.486516  349264 kapi.go:59] client config for ingress-addon-legacy-648725: &rest.Config{Host:"https://192.168.39.42:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 02:55:31.486864  349264 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-648725" to be "Ready" ...
	I1128 02:55:31.614449  349264 node_ready.go:49] node "ingress-addon-legacy-648725" has status "Ready":"True"
	I1128 02:55:31.614483  349264 node_ready.go:38] duration metric: took 127.59379ms waiting for node "ingress-addon-legacy-648725" to be "Ready" ...
	I1128 02:55:31.614500  349264 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 02:55:31.646605  349264 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-9pn95" in "kube-system" namespace to be "Ready" ...
	I1128 02:55:32.030716  349264 main.go:141] libmachine: Making call to close driver server
	I1128 02:55:32.030751  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Close
	I1128 02:55:32.031079  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Closing plugin on server side
	I1128 02:55:32.031117  349264 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:55:32.031135  349264 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:55:32.031147  349264 main.go:141] libmachine: Making call to close driver server
	I1128 02:55:32.031159  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Close
	I1128 02:55:32.031449  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Closing plugin on server side
	I1128 02:55:32.031483  349264 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:55:32.031499  349264 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:55:32.045554  349264 main.go:141] libmachine: Making call to close driver server
	I1128 02:55:32.045577  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Close
	I1128 02:55:32.045875  349264 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:55:32.045895  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Closing plugin on server side
	I1128 02:55:32.045900  349264 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:55:32.245173  349264 main.go:141] libmachine: Making call to close driver server
	I1128 02:55:32.245200  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Close
	I1128 02:55:32.245194  349264 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 02:55:32.245619  349264 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:55:32.245650  349264 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:55:32.245670  349264 main.go:141] libmachine: Making call to close driver server
	I1128 02:55:32.245680  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) Calling .Close
	I1128 02:55:32.245924  349264 main.go:141] libmachine: Successfully made call to close driver server
	I1128 02:55:32.245946  349264 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 02:55:32.245945  349264 main.go:141] libmachine: (ingress-addon-legacy-648725) DBG | Closing plugin on server side
	I1128 02:55:32.247855  349264 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1128 02:55:32.249751  349264 addons.go:502] enable addons completed in 1.163547104s: enabled=[default-storageclass storage-provisioner]
	I1128 02:55:34.025802  349264 pod_ready.go:102] pod "coredns-66bff467f8-9pn95" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:35.022238  349264 pod_ready.go:97] error getting pod "coredns-66bff467f8-9pn95" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-9pn95" not found
	I1128 02:55:35.022271  349264 pod_ready.go:81] duration metric: took 3.375626533s waiting for pod "coredns-66bff467f8-9pn95" in "kube-system" namespace to be "Ready" ...
	E1128 02:55:35.022283  349264 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-66bff467f8-9pn95" in "kube-system" namespace (skipping!): pods "coredns-66bff467f8-9pn95" not found
	I1128 02:55:35.022288  349264 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-slnj6" in "kube-system" namespace to be "Ready" ...
	I1128 02:55:37.040367  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:39.040749  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:41.041061  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:43.541489  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:46.041413  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:48.041715  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:50.042129  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:52.539998  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:54.540823  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:56.540908  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:55:58.541007  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:56:00.541351  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:56:03.040322  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:56:05.042020  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:56:07.540973  349264 pod_ready.go:102] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"False"
	I1128 02:56:08.540683  349264 pod_ready.go:92] pod "coredns-66bff467f8-slnj6" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.540710  349264 pod_ready.go:81] duration metric: took 33.518415484s waiting for pod "coredns-66bff467f8-slnj6" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.540720  349264 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.545674  349264 pod_ready.go:92] pod "etcd-ingress-addon-legacy-648725" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.545692  349264 pod_ready.go:81] duration metric: took 4.965689ms waiting for pod "etcd-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.545701  349264 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.550859  349264 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-648725" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.550876  349264 pod_ready.go:81] duration metric: took 5.167194ms waiting for pod "kube-apiserver-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.550884  349264 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.555762  349264 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-648725" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.555779  349264 pod_ready.go:81] duration metric: took 4.888901ms waiting for pod "kube-controller-manager-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.555787  349264 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6sb67" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.560893  349264 pod_ready.go:92] pod "kube-proxy-6sb67" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.560910  349264 pod_ready.go:81] duration metric: took 5.117641ms waiting for pod "kube-proxy-6sb67" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.560918  349264 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.735360  349264 request.go:629] Waited for 174.350094ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-648725
	I1128 02:56:08.935397  349264 request.go:629] Waited for 196.438609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/nodes/ingress-addon-legacy-648725
	I1128 02:56:08.939144  349264 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-648725" in "kube-system" namespace has status "Ready":"True"
	I1128 02:56:08.939173  349264 pod_ready.go:81] duration metric: took 378.244782ms waiting for pod "kube-scheduler-ingress-addon-legacy-648725" in "kube-system" namespace to be "Ready" ...
	I1128 02:56:08.939184  349264 pod_ready.go:38] duration metric: took 37.324665576s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 02:56:08.939204  349264 api_server.go:52] waiting for apiserver process to appear ...
	I1128 02:56:08.939301  349264 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 02:56:08.954326  349264 api_server.go:72] duration metric: took 37.771371806s to wait for apiserver process to appear ...
	I1128 02:56:08.954354  349264 api_server.go:88] waiting for apiserver healthz status ...
	I1128 02:56:08.954370  349264 api_server.go:253] Checking apiserver healthz at https://192.168.39.42:8443/healthz ...
	I1128 02:56:08.960602  349264 api_server.go:279] https://192.168.39.42:8443/healthz returned 200:
	ok
	I1128 02:56:08.961628  349264 api_server.go:141] control plane version: v1.18.20
	I1128 02:56:08.961651  349264 api_server.go:131] duration metric: took 7.291121ms to wait for apiserver health ...
	I1128 02:56:08.961659  349264 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 02:56:09.135097  349264 request.go:629] Waited for 173.319201ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/namespaces/kube-system/pods
	I1128 02:56:09.140309  349264 system_pods.go:59] 7 kube-system pods found
	I1128 02:56:09.140343  349264 system_pods.go:61] "coredns-66bff467f8-slnj6" [0859d02b-7238-4761-9de1-8b3f685b3bc0] Running
	I1128 02:56:09.140348  349264 system_pods.go:61] "etcd-ingress-addon-legacy-648725" [e26d2e37-54be-4af7-97fe-bf0af7ed16ab] Running
	I1128 02:56:09.140353  349264 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-648725" [46706dce-2ff5-4cec-a3a0-9dad4ab4ead0] Running
	I1128 02:56:09.140357  349264 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-648725" [d572e198-7518-4c31-88d5-5f1987af7d1d] Running
	I1128 02:56:09.140361  349264 system_pods.go:61] "kube-proxy-6sb67" [5b9989da-4aa5-46be-b402-1b0b637e5be8] Running
	I1128 02:56:09.140371  349264 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-648725" [b8e03ff8-dd45-4ed7-bdfa-fd7b2aeb9b08] Running
	I1128 02:56:09.140381  349264 system_pods.go:61] "storage-provisioner" [b2f4de3a-7696-4e71-9d9b-831c81026424] Running
	I1128 02:56:09.140387  349264 system_pods.go:74] duration metric: took 178.721131ms to wait for pod list to return data ...
	I1128 02:56:09.140397  349264 default_sa.go:34] waiting for default service account to be created ...
	I1128 02:56:09.334807  349264 request.go:629] Waited for 194.326294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/namespaces/default/serviceaccounts
	I1128 02:56:09.338147  349264 default_sa.go:45] found service account: "default"
	I1128 02:56:09.338177  349264 default_sa.go:55] duration metric: took 197.761984ms for default service account to be created ...
	I1128 02:56:09.338186  349264 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 02:56:09.535657  349264 request.go:629] Waited for 197.386895ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/namespaces/kube-system/pods
	I1128 02:56:09.540783  349264 system_pods.go:86] 7 kube-system pods found
	I1128 02:56:09.540817  349264 system_pods.go:89] "coredns-66bff467f8-slnj6" [0859d02b-7238-4761-9de1-8b3f685b3bc0] Running
	I1128 02:56:09.540823  349264 system_pods.go:89] "etcd-ingress-addon-legacy-648725" [e26d2e37-54be-4af7-97fe-bf0af7ed16ab] Running
	I1128 02:56:09.540828  349264 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-648725" [46706dce-2ff5-4cec-a3a0-9dad4ab4ead0] Running
	I1128 02:56:09.540832  349264 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-648725" [d572e198-7518-4c31-88d5-5f1987af7d1d] Running
	I1128 02:56:09.540838  349264 system_pods.go:89] "kube-proxy-6sb67" [5b9989da-4aa5-46be-b402-1b0b637e5be8] Running
	I1128 02:56:09.540843  349264 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-648725" [b8e03ff8-dd45-4ed7-bdfa-fd7b2aeb9b08] Running
	I1128 02:56:09.540847  349264 system_pods.go:89] "storage-provisioner" [b2f4de3a-7696-4e71-9d9b-831c81026424] Running
	I1128 02:56:09.540853  349264 system_pods.go:126] duration metric: took 202.662186ms to wait for k8s-apps to be running ...
	I1128 02:56:09.540867  349264 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 02:56:09.541005  349264 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 02:56:09.556989  349264 system_svc.go:56] duration metric: took 16.111645ms WaitForService to wait for kubelet.
	I1128 02:56:09.557022  349264 kubeadm.go:581] duration metric: took 38.374076231s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 02:56:09.557041  349264 node_conditions.go:102] verifying NodePressure condition ...
	I1128 02:56:09.735505  349264 request.go:629] Waited for 178.368845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.42:8443/api/v1/nodes
	I1128 02:56:09.739060  349264 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 02:56:09.739091  349264 node_conditions.go:123] node cpu capacity is 2
	I1128 02:56:09.739102  349264 node_conditions.go:105] duration metric: took 182.056421ms to run NodePressure ...
	I1128 02:56:09.739114  349264 start.go:228] waiting for startup goroutines ...
	I1128 02:56:09.739121  349264 start.go:233] waiting for cluster config update ...
	I1128 02:56:09.739131  349264 start.go:242] writing updated cluster config ...
	I1128 02:56:09.739427  349264 ssh_runner.go:195] Run: rm -f paused
	I1128 02:56:09.789742  349264 start.go:600] kubectl: 1.28.4, cluster: 1.18.20 (minor skew: 10)
	I1128 02:56:09.791657  349264 out.go:177] 
	W1128 02:56:09.792856  349264 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.18.20.
	I1128 02:56:09.794253  349264 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I1128 02:56:09.795583  349264 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-648725" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 02:54:41 UTC, ends at Tue 2023-11-28 02:59:11 UTC. --
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.530626793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701140351530612192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=a464bbae-9b35-43ab-9524-2f8455cdf3f0 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.531294387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6afd3ec3-6e67-405c-b00a-a238a4454145 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.531346695Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6afd3ec3-6e67-405c-b00a-a238a4454145 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.531623685Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8723517f29ade8661b9723888e020c4328fa2783d094fed31af87f642ab5add8,PodSandboxId:12aa5bfaacfce4ffb46409a4a001f4a9100422f7ee43c021391b62546bddd570,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701140343559660562,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-slfsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eaaf8ff5-3f4b-4497-a568-6d0fda91c62e,},Annotations:map[string]string{io.kubernetes.container.hash: 8c2d242e,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17f3588f248ce75026fdf2ef865bab26715964908a984364b63cefef6176bac,PodSandboxId:6b9955a0ba01335dcaf9ff53a3e9261ce977d3651038e43cd75dc60f4f5e4644,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701140203272158144,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2a23d3-f913-4a78-bbb4-769b100dfb31,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f5143a8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:054dbe2f43355ca172e7b04649e8de46fda41c762dcb29c837fdb7e89e48170d,PodSandboxId:b8d89caf58474144755e9bd424880cc22447235f91859c140d114e67bc4bf05a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701140181994262534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sv6sz,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 452ec6e9-c9c5-4cd9-8d42-2956b69020c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78e4615,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ceb95bb93b7cb3223d3952adb1aa02099086a9f5cd627c3448d0f98f41fbb672,PodSandboxId:b31ce1fff77d651208817755a1482c52751a11e1f7fd72c572429ee95439e116,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701140174224747718,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k7mzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bed076c2-53d1-4bdd-a7db-154662175deb,},Annotations:map[string]string{io.kubernetes.container.hash: 9af157ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a421761e75a00a0e3258c0d8f72ac3f6f4c781c136941cbad8e2c8239ba8f5e,PodSandboxId:29dda7444fc8a44f261d6ee86b14fc9da2dbb4115b79dea0f655a0db698b78ca,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701140173771463046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5k2vd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de1e29b9-bbf0-4ed4-b2b1-188383ef6159,},Annotations:map[string]string{io.kubernetes.container.hash: cb29cb8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac01131533d10885bb870f56d067ddf5c57af56ffb462f24c3e4df6fa0988385,PodSandboxId:afc452f785c9dea28a1a7dbba5a1fd68e446390d1236c0ec02377e7a690a0579,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140133120826984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4de3a-7696-4e71-9d9b-831c81026424,},Annotations:map[string]string{io.kubernetes.container.hash: dc87775d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5ee5ef5ca4f34df1f75b5413c044a8c6a7c40261044b8add56ae00723e622b,PodSandboxId:4945f565e9fdee5a48f079c5973b0321b6fd9e8695f6ea39976f00f7136465f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701140132549258405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sb67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9989da-4aa5-46be-b402-1b0b637e5be8,},Annotations:map[string]string{io.kubernetes.container.hash: d6d8772a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e177f304e0fda55130a605967fe1ca8a5d229abbe6fbc3a762dfbf20cc7d51,PodSandboxId:4cba59d5a9e96888b447803ce75dd3b75197befb295c83f4cdc4ca81f9df7b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701140131743907928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-slnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0859d02b-7238-4761-9de1-8b3f685b3bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9062d1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3457e04817f90ca04ded8eff4f4e3f786e6cb5373a392407ad15ffbe9a4904,PodS
andboxId:734cde3e774951fd1c3d277f268c2806ee23e2e0791a556f5eaa794653842a0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701140107244675143,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 850336e8b50fb01f7869155b6ae2a4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 4205d7bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f9cd163d5a20b3f1c10e061ccb38c26f5ce58c3282e5ba6fa78ccb18339f1,PodSandboxId:d351f69644aa132054b0b84a2c698b6e2db8a
b3b577bdaa10941333625b943f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701140105735312861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d80fbea4d08417510e788c5aee8b3722d77e57511a4ee3ef123b8ad53443979,PodSandboxId:0dbc54b
7fea2a7ea2b3b68b34dd2c183ffec8f17790e6d9adcc0fe4461fd0d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701140105693644946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9f6548b360eb9617c6f3cbc869f840c,},Annotations:map[string]string{io.kubernetes.container.hash: 5414229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bb753d0684a5894c2b45fa8a53472362f5ecdece22171213e9b520cb05bf2c,PodSandboxId:0a4f6a75901d7c
79106daf5394b548d69afac3bf9f26a8a47bf69c15a4b419b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701140105649521980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6afd3ec3-6e67-405c-b00a-a238a4454145 name=/runtime.v1.RuntimeServi
ce/ListContainers
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.569885909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=258873ee-7491-4c1a-b63d-256c3654ccb9 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.569941536Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=258873ee-7491-4c1a-b63d-256c3654ccb9 name=/runtime.v1.RuntimeService/Version
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.571522027Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5b9b740c-d6d8-47f4-b59c-da50fde5b636 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.572080757Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701140351572063498,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:202349,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=5b9b740c-d6d8-47f4-b59c-da50fde5b636 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.572573787Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9fd005c7-568a-4f3e-92f3-817f8f2514a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.572616882Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9fd005c7-568a-4f3e-92f3-817f8f2514a6 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 02:59:11 ingress-addon-legacy-648725 crio[721]: time="2023-11-28 02:59:11.572869847Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8723517f29ade8661b9723888e020c4328fa2783d094fed31af87f642ab5add8,PodSandboxId:12aa5bfaacfce4ffb46409a4a001f4a9100422f7ee43c021391b62546bddd570,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1701140343559660562,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-slfsj,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: eaaf8ff5-3f4b-4497-a568-6d0fda91c62e,},Annotations:map[string]string{io.kubernetes.container.hash: 8c2d242e,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c17f3588f248ce75026fdf2ef865bab26715964908a984364b63cefef6176bac,PodSandboxId:6b9955a0ba01335dcaf9ff53a3e9261ce977d3651038e43cd75dc60f4f5e4644,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d,State:CONTAINER_RUNNING,CreatedAt:1701140203272158144,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 0a2a23d3-f913-4a78-bbb4-769b100dfb31,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f5143a8a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:054dbe2f43355ca172e7b04649e8de46fda41c762dcb29c837fdb7e89e48170d,PodSandboxId:b8d89caf58474144755e9bd424880cc22447235f91859c140d114e67bc4bf05a,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1701140181994262534,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-sv6sz,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 452ec6e9-c9c5-4cd9-8d42-2956b69020c2,},Annotations:map[string]string{io.kubernetes.container.hash: 78e4615,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:ceb95bb93b7cb3223d3952adb1aa02099086a9f5cd627c3448d0f98f41fbb672,PodSandboxId:b31ce1fff77d651208817755a1482c52751a11e1f7fd72c572429ee95439e116,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701140174224747718,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-k7mzx,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: bed076c2-53d1-4bdd-a7db-154662175deb,},Annotations:map[string]string{io.kubernetes.container.hash: 9af157ab,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8a421761e75a00a0e3258c0d8f72ac3f6f4c781c136941cbad8e2c8239ba8f5e,PodSandboxId:29dda7444fc8a44f261d6ee86b14fc9da2dbb4115b79dea0f655a0db698b78ca,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1701140173771463046,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-5k2vd,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: de1e29b9-bbf0-4ed4-b2b1-188383ef6159,},Annotations:map[string]string{io.kubernetes.container.hash: cb29cb8b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ac01131533d10885bb870f56d067ddf5c57af56ffb462f24c3e4df6fa0988385,PodSandboxId:afc452f785c9dea28a1a7dbba5a1fd68e446390d1236c0ec02377e7a690a0579,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image
:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140133120826984,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2f4de3a-7696-4e71-9d9b-831c81026424,},Annotations:map[string]string{io.kubernetes.container.hash: dc87775d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a5ee5ef5ca4f34df1f75b5413c044a8c6a7c40261044b8add56ae00723e622b,PodSandboxId:4945f565e9fdee5a48f079c5973b0321b6fd9e8695f6ea39976f00f7136465f9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpe
c{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1701140132549258405,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-6sb67,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b9989da-4aa5-46be-b402-1b0b637e5be8,},Annotations:map[string]string{io.kubernetes.container.hash: d6d8772a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6e177f304e0fda55130a605967fe1ca8a5d229abbe6fbc3a762dfbf20cc7d51,PodSandboxId:4cba59d5a9e96888b447803ce75dd3b75197befb295c83f4cdc4ca81f9df7b31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:67da37a9a360e600e74464da48437257b00
a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1701140131743907928,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-slnj6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0859d02b-7238-4761-9de1-8b3f685b3bc0,},Annotations:map[string]string{io.kubernetes.container.hash: 9062d1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed3457e04817f90ca04ded8eff4f4e3f786e6cb5373a392407ad15ffbe9a4904,PodS
andboxId:734cde3e774951fd1c3d277f268c2806ee23e2e0791a556f5eaa794653842a0e,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1701140107244675143,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 850336e8b50fb01f7869155b6ae2a4c5,},Annotations:map[string]string{io.kubernetes.container.hash: 4205d7bc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:743f9cd163d5a20b3f1c10e061ccb38c26f5ce58c3282e5ba6fa78ccb18339f1,PodSandboxId:d351f69644aa132054b0b84a2c698b6e2db8a
b3b577bdaa10941333625b943f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1701140105735312861,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3d80fbea4d08417510e788c5aee8b3722d77e57511a4ee3ef123b8ad53443979,PodSandboxId:0dbc54b
7fea2a7ea2b3b68b34dd2c183ffec8f17790e6d9adcc0fe4461fd0d27,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1701140105693644946,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d9f6548b360eb9617c6f3cbc869f840c,},Annotations:map[string]string{io.kubernetes.container.hash: 5414229,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d0bb753d0684a5894c2b45fa8a53472362f5ecdece22171213e9b520cb05bf2c,PodSandboxId:0a4f6a75901d7c
79106daf5394b548d69afac3bf9f26a8a47bf69c15a4b419b3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1701140105649521980,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-648725,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9fd005c7-568a-4f3e-92f3-817f8f2514a6 name=/runtime.v1.RuntimeServi
ce/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8723517f29ade       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            8 seconds ago       Running             hello-world-app           0                   12aa5bfaacfce       hello-world-app-5f5d8b66bb-slfsj
	c17f3588f248c       docker.io/library/nginx@sha256:7e528502b614e1ed9f88e495f2af843c255905e0e549b935fdedd95336e6de8d                    2 minutes ago       Running             nginx                     0                   6b9955a0ba013       nginx
	054dbe2f43355       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   b8d89caf58474       ingress-nginx-controller-7fcf777cb7-sv6sz
	ceb95bb93b7cb       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   b31ce1fff77d6       ingress-nginx-admission-patch-k7mzx
	8a421761e75a0       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   29dda7444fc8a       ingress-nginx-admission-create-5k2vd
	ac01131533d10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   afc452f785c9d       storage-provisioner
	1a5ee5ef5ca4f       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   4945f565e9fde       kube-proxy-6sb67
	c6e177f304e0f       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   4cba59d5a9e96       coredns-66bff467f8-slnj6
	ed3457e04817f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   4 minutes ago       Running             etcd                      0                   734cde3e77495       etcd-ingress-addon-legacy-648725
	743f9cd163d5a       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   4 minutes ago       Running             kube-controller-manager   0                   d351f69644aa1       kube-controller-manager-ingress-addon-legacy-648725
	3d80fbea4d084       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   4 minutes ago       Running             kube-apiserver            0                   0dbc54b7fea2a       kube-apiserver-ingress-addon-legacy-648725
	d0bb753d0684a       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   4 minutes ago       Running             kube-scheduler            0                   0a4f6a75901d7       kube-scheduler-ingress-addon-legacy-648725
	
	* 
	* ==> coredns [c6e177f304e0fda55130a605967fe1ca8a5d229abbe6fbc3a762dfbf20cc7d51] <==
	* [INFO] 10.244.0.6:60210 - 40191 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000056242s
	[INFO] 10.244.0.6:51339 - 63159 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00010052s
	[INFO] 10.244.0.6:60210 - 27494 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000030348s
	[INFO] 10.244.0.6:51339 - 63667 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000259738s
	[INFO] 10.244.0.6:60210 - 45626 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000042397s
	[INFO] 10.244.0.6:51339 - 15781 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000103176s
	[INFO] 10.244.0.6:60210 - 53085 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000051182s
	[INFO] 10.244.0.6:51339 - 60820 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000116412s
	[INFO] 10.244.0.6:60210 - 30333 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000148241s
	[INFO] 10.244.0.6:60210 - 39685 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000074447s
	[INFO] 10.244.0.6:60210 - 34040 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075243s
	[INFO] 10.244.0.6:55754 - 43062 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000080926s
	[INFO] 10.244.0.6:46865 - 3683 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000065891s
	[INFO] 10.244.0.6:55754 - 3491 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000048619s
	[INFO] 10.244.0.6:46865 - 51926 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000025122s
	[INFO] 10.244.0.6:55754 - 30352 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000039132s
	[INFO] 10.244.0.6:46865 - 53683 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023141s
	[INFO] 10.244.0.6:46865 - 3212 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000023866s
	[INFO] 10.244.0.6:55754 - 2830 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000022104s
	[INFO] 10.244.0.6:46865 - 33749 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000027487s
	[INFO] 10.244.0.6:55754 - 22040 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019955s
	[INFO] 10.244.0.6:46865 - 10488 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000038177s
	[INFO] 10.244.0.6:55754 - 16101 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.0000345s
	[INFO] 10.244.0.6:55754 - 11267 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000035031s
	[INFO] 10.244.0.6:46865 - 37069 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000023173s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-648725
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-648725
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=ingress-addon-legacy-648725
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T02_55_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 02:55:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-648725
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 02:59:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 02:56:44 +0000   Tue, 28 Nov 2023 02:55:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 02:56:44 +0000   Tue, 28 Nov 2023 02:55:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 02:56:44 +0000   Tue, 28 Nov 2023 02:55:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 02:56:44 +0000   Tue, 28 Nov 2023 02:55:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.42
	  Hostname:    ingress-addon-legacy-648725
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 a60d36f03a454491b5ea92ae8de59038
	  System UUID:                a60d36f0-3a45-4491-b5ea-92ae8de59038
	  Boot ID:                    cebeedef-81e8-48cd-8475-c9a8866b2d67
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-slfsj                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         11s
	  default                     nginx                                                  0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         2m33s
	  kube-system                 coredns-66bff467f8-slnj6                               100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     3m41s
	  kube-system                 etcd-ingress-addon-legacy-648725                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m56s
	  kube-system                 kube-apiserver-ingress-addon-legacy-648725             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m56s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-648725    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m56s
	  kube-system                 kube-proxy-6sb67                                       0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m41s
	  kube-system                 kube-scheduler-ingress-addon-legacy-648725             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m56s
	  kube-system                 storage-provisioner                                    0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%!)(MISSING)  0 (0%!)(MISSING)
	  memory             70Mi (1%!)(MISSING)   170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From        Message
	  ----    ------                   ----                 ----        -------
	  Normal  NodeHasSufficientMemory  4m7s (x5 over 4m7s)  kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m7s (x5 over 4m7s)  kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m7s (x4 over 4m7s)  kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m57s                kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m57s                kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m57s                kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m57s                kubelet     Node ingress-addon-legacy-648725 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m57s                kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m47s                kubelet     Node ingress-addon-legacy-648725 status is now: NodeReady
	  Normal  Starting                 3m39s                kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov28 02:54] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094225] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.432731] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.387537] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141760] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +4.973563] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.211105] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.114159] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.134309] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.098608] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.198655] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[Nov28 02:55] systemd-fstab-generator[1031]: Ignoring "noauto" for root device
	[  +2.726655] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.855406] systemd-fstab-generator[1417]: Ignoring "noauto" for root device
	[ +17.098348] kauditd_printk_skb: 6 callbacks suppressed
	[Nov28 02:56] kauditd_printk_skb: 20 callbacks suppressed
	[  +6.428863] kauditd_printk_skb: 6 callbacks suppressed
	[ +23.004445] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.932206] kauditd_printk_skb: 3 callbacks suppressed
	[Nov28 02:59] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [ed3457e04817f90ca04ded8eff4f4e3f786e6cb5373a392407ad15ffbe9a4904] <==
	* raft2023/11/28 02:55:07 INFO: be5e8f7004ae306c became follower at term 1
	raft2023/11/28 02:55:07 INFO: be5e8f7004ae306c switched to configuration voters=(13717559226294743148)
	2023-11-28 02:55:07.395167 W | auth: simple token is not cryptographically signed
	2023-11-28 02:55:07.399398 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-11-28 02:55:07.401483 I | etcdserver: be5e8f7004ae306c as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/11/28 02:55:07 INFO: be5e8f7004ae306c switched to configuration voters=(13717559226294743148)
	2023-11-28 02:55:07.401972 I | etcdserver/membership: added member be5e8f7004ae306c [https://192.168.39.42:2380] to cluster beed476d98f529f8
	2023-11-28 02:55:07.402105 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 02:55:07.402175 I | embed: listening for peers on 192.168.39.42:2380
	2023-11-28 02:55:07.402270 I | embed: listening for metrics on http://127.0.0.1:2381
	raft2023/11/28 02:55:08 INFO: be5e8f7004ae306c is starting a new election at term 1
	raft2023/11/28 02:55:08 INFO: be5e8f7004ae306c became candidate at term 2
	raft2023/11/28 02:55:08 INFO: be5e8f7004ae306c received MsgVoteResp from be5e8f7004ae306c at term 2
	raft2023/11/28 02:55:08 INFO: be5e8f7004ae306c became leader at term 2
	raft2023/11/28 02:55:08 INFO: raft.node: be5e8f7004ae306c elected leader be5e8f7004ae306c at term 2
	2023-11-28 02:55:08.087610 I | etcdserver: published {Name:ingress-addon-legacy-648725 ClientURLs:[https://192.168.39.42:2379]} to cluster beed476d98f529f8
	2023-11-28 02:55:08.087647 I | embed: ready to serve client requests
	2023-11-28 02:55:08.088627 I | etcdserver: setting up the initial cluster version to 3.4
	2023-11-28 02:55:08.088772 I | embed: ready to serve client requests
	2023-11-28 02:55:08.089323 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 02:55:08.089915 I | embed: serving client requests on 192.168.39.42:2379
	2023-11-28 02:55:08.090971 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-11-28 02:55:08.091121 I | etcdserver/api: enabled capabilities for version 3.4
	2023-11-28 02:55:30.716600 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (182.928149ms) to execute
	2023-11-28 02:56:48.391402 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2213" took too long (174.113732ms) to execute
	
	* 
	* ==> kernel <==
	*  02:59:11 up 4 min,  0 users,  load average: 0.58, 0.31, 0.13
	Linux ingress-addon-legacy-648725 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3d80fbea4d08417510e788c5aee8b3722d77e57511a4ee3ef123b8ad53443979] <==
	* I1128 02:55:11.018988       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E1128 02:55:11.044930       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.42, ResourceVersion: 0, AdditionalErrorMsg: 
	I1128 02:55:11.097366       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I1128 02:55:11.131180       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 02:55:11.131229       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 02:55:11.131240       1 cache.go:39] Caches are synced for autoregister controller
	I1128 02:55:11.131339       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I1128 02:55:11.981721       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I1128 02:55:11.981761       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1128 02:55:11.989193       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I1128 02:55:11.994892       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I1128 02:55:11.994940       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I1128 02:55:12.466621       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 02:55:12.511956       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W1128 02:55:12.605245       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.42]
	I1128 02:55:12.606273       1 controller.go:609] quota admission added evaluator for: endpoints
	I1128 02:55:12.612182       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 02:55:13.348166       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I1128 02:55:14.226192       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I1128 02:55:14.342953       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I1128 02:55:14.672800       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 02:55:30.231591       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I1128 02:55:30.810205       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I1128 02:56:10.648793       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I1128 02:56:37.914666       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	* 
	* ==> kube-controller-manager [743f9cd163d5a20b3f1c10e061ccb38c26f5ce58c3282e5ba6fa78ccb18339f1] <==
	* I1128 02:55:30.592147       1 shared_informer.go:230] Caches are synced for HPA 
	I1128 02:55:30.692906       1 shared_informer.go:230] Caches are synced for disruption 
	I1128 02:55:30.692950       1 disruption.go:339] Sending events to api server.
	I1128 02:55:30.750395       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"203fbfb1-d4bd-467b-9028-d296717fb716", APIVersion:"apps/v1", ResourceVersion:"302", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-9pn95
	I1128 02:55:30.753503       1 range_allocator.go:373] Set node ingress-addon-legacy-648725 PodCIDR to [10.244.0.0/24]
	I1128 02:55:30.794370       1 shared_informer.go:230] Caches are synced for stateful set 
	I1128 02:55:30.795753       1 shared_informer.go:230] Caches are synced for daemon sets 
	I1128 02:55:30.804629       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"203fbfb1-d4bd-467b-9028-d296717fb716", APIVersion:"apps/v1", ResourceVersion:"302", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-slnj6
	I1128 02:55:30.810754       1 shared_informer.go:230] Caches are synced for resource quota 
	I1128 02:55:30.810776       1 shared_informer.go:230] Caches are synced for attach detach 
	I1128 02:55:30.810802       1 shared_informer.go:230] Caches are synced for resource quota 
	I1128 02:55:30.874534       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"d86bb595-4725-4a8d-ba72-7cb4eff93a91", APIVersion:"apps/v1", ResourceVersion:"223", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-6sb67
	I1128 02:55:30.904887       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1128 02:55:30.904948       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1128 02:55:30.909977       1 shared_informer.go:230] Caches are synced for garbage collector 
	I1128 02:55:31.184663       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"60533e6b-f5cf-464c-8285-49079494581a", APIVersion:"apps/v1", ResourceVersion:"357", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set coredns-66bff467f8 to 1
	I1128 02:55:31.267672       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"203fbfb1-d4bd-467b-9028-d296717fb716", APIVersion:"apps/v1", ResourceVersion:"358", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: coredns-66bff467f8-9pn95
	I1128 02:56:10.619164       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"66aa7fbb-0865-42ba-8cef-755c5a69f0dd", APIVersion:"apps/v1", ResourceVersion:"469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I1128 02:56:10.646971       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"6555ab4f-b28f-4aed-8fa0-35580b2ff029", APIVersion:"apps/v1", ResourceVersion:"470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-sv6sz
	I1128 02:56:10.718917       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"40d8793f-baae-40e0-90f2-6d0ae8696dbd", APIVersion:"batch/v1", ResourceVersion:"478", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-5k2vd
	I1128 02:56:10.815223       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8b05a3ac-73c3-4f76-a3fa-610d3e385342", APIVersion:"batch/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-k7mzx
	I1128 02:56:14.998118       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"40d8793f-baae-40e0-90f2-6d0ae8696dbd", APIVersion:"batch/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1128 02:56:15.045531       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"8b05a3ac-73c3-4f76-a3fa-610d3e385342", APIVersion:"batch/v1", ResourceVersion:"496", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I1128 02:59:00.360375       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"a532affd-ebed-4d64-8845-ba4947a2945e", APIVersion:"apps/v1", ResourceVersion:"693", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I1128 02:59:00.383618       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"ad9d7f9c-d879-47dd-af43-1a9ca2953e67", APIVersion:"apps/v1", ResourceVersion:"694", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-slfsj
	
	* 
	* ==> kube-proxy [1a5ee5ef5ca4f34df1f75b5413c044a8c6a7c40261044b8add56ae00723e622b] <==
	* W1128 02:55:32.885898       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I1128 02:55:32.897851       1 node.go:136] Successfully retrieved node IP: 192.168.39.42
	I1128 02:55:32.897924       1 server_others.go:186] Using iptables Proxier.
	I1128 02:55:32.899442       1 server.go:583] Version: v1.18.20
	I1128 02:55:32.902103       1 config.go:315] Starting service config controller
	I1128 02:55:32.902219       1 shared_informer.go:223] Waiting for caches to sync for service config
	I1128 02:55:32.902355       1 config.go:133] Starting endpoints config controller
	I1128 02:55:32.902382       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I1128 02:55:33.002524       1 shared_informer.go:230] Caches are synced for service config 
	I1128 02:55:33.002687       1 shared_informer.go:230] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [d0bb753d0684a5894c2b45fa8a53472362f5ecdece22171213e9b520cb05bf2c] <==
	* I1128 02:55:11.106099       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I1128 02:55:11.106348       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 02:55:11.106377       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 02:55:11.106483       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E1128 02:55:11.111561       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 02:55:11.111684       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 02:55:11.111810       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 02:55:11.111904       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 02:55:11.111974       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 02:55:11.112117       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 02:55:11.112224       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 02:55:11.112295       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 02:55:11.114261       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 02:55:11.115427       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 02:55:11.115807       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 02:55:11.121487       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 02:55:11.968694       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 02:55:11.973173       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 02:55:11.973330       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 02:55:12.088079       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 02:55:12.137064       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 02:55:12.245182       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 02:55:12.253653       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 02:55:12.284908       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I1128 02:55:12.606670       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 02:54:41 UTC, ends at Tue 2023-11-28 02:59:12 UTC. --
	Nov 28 02:56:23 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:56:23.464581    1424 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 28 02:56:23 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:56:23.650291    1424 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-mvg6z" (UniqueName: "kubernetes.io/secret/c500ee76-2081-4af8-8caf-a0acff9503ca-minikube-ingress-dns-token-mvg6z") pod "kube-ingress-dns-minikube" (UID: "c500ee76-2081-4af8-8caf-a0acff9503ca")
	Nov 28 02:56:38 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:56:38.096850    1424 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 28 02:56:38 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:56:38.100923    1424 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8s62g" (UniqueName: "kubernetes.io/secret/0a2a23d3-f913-4a78-bbb4-769b100dfb31-default-token-8s62g") pod "nginx" (UID: "0a2a23d3-f913-4a78-bbb4-769b100dfb31")
	Nov 28 02:56:38 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:56:38.117104    1424 reflector.go:178] object-"default"/"default-token-8s62g": Failed to list *v1.Secret: secrets "default-token-8s62g" is forbidden: User "system:node:ingress-addon-legacy-648725" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-648725" and this object
	Nov 28 02:56:39 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:56:39.202062    1424 secret.go:195] Couldn't get secret default/default-token-8s62g: failed to sync secret cache: timed out waiting for the condition
	Nov 28 02:56:39 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:56:39.202205    1424 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/0a2a23d3-f913-4a78-bbb4-769b100dfb31-default-token-8s62g podName:0a2a23d3-f913-4a78-bbb4-769b100dfb31 nodeName:}" failed. No retries permitted until 2023-11-28 02:56:39.702180109 +0000 UTC m=+85.536865662 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-8s62g\" (UniqueName: \"kubernetes.io/secret/0a2a23d3-f913-4a78-bbb4-769b100dfb31-default-token-8s62g\") pod \"nginx\" (UID: \"0a2a23d3-f913-4a78-bbb4-769b100dfb31\") : failed to sync secret cache: timed out waiting for the condition"
	Nov 28 02:59:00 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:00.405191    1424 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Nov 28 02:59:00 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:00.572656    1424 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-8s62g" (UniqueName: "kubernetes.io/secret/eaaf8ff5-3f4b-4497-a568-6d0fda91c62e-default-token-8s62g") pod "hello-world-app-5f5d8b66bb-slfsj" (UID: "eaaf8ff5-3f4b-4497-a568-6d0fda91c62e")
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:02.058482    1424 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1eda9e42f43be10d45a103f0f1eec9943716d3fdb2e5f40438ea19f686765c6d
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:02.179064    1424 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-mvg6z" (UniqueName: "kubernetes.io/secret/c500ee76-2081-4af8-8caf-a0acff9503ca-minikube-ingress-dns-token-mvg6z") pod "c500ee76-2081-4af8-8caf-a0acff9503ca" (UID: "c500ee76-2081-4af8-8caf-a0acff9503ca")
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:02.185172    1424 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c500ee76-2081-4af8-8caf-a0acff9503ca-minikube-ingress-dns-token-mvg6z" (OuterVolumeSpecName: "minikube-ingress-dns-token-mvg6z") pod "c500ee76-2081-4af8-8caf-a0acff9503ca" (UID: "c500ee76-2081-4af8-8caf-a0acff9503ca"). InnerVolumeSpecName "minikube-ingress-dns-token-mvg6z". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:02.279452    1424 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-mvg6z" (UniqueName: "kubernetes.io/secret/c500ee76-2081-4af8-8caf-a0acff9503ca-minikube-ingress-dns-token-mvg6z") on node "ingress-addon-legacy-648725" DevicePath ""
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:02.285762    1424 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 1eda9e42f43be10d45a103f0f1eec9943716d3fdb2e5f40438ea19f686765c6d
	Nov 28 02:59:02 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:59:02.286451    1424 remote_runtime.go:295] ContainerStatus "1eda9e42f43be10d45a103f0f1eec9943716d3fdb2e5f40438ea19f686765c6d" from runtime service failed: rpc error: code = NotFound desc = could not find container "1eda9e42f43be10d45a103f0f1eec9943716d3fdb2e5f40438ea19f686765c6d": container with ID starting with 1eda9e42f43be10d45a103f0f1eec9943716d3fdb2e5f40438ea19f686765c6d not found: ID does not exist
	Nov 28 02:59:04 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:59:04.202704    1424 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sv6sz.179baa2148c08985", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sv6sz", UID:"452ec6e9-c9c5-4cd9-8d42-2956b69020c2", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-648725"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15173be0b8bd985, ext:230028400100, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15173be0b8bd985, ext:230028400100, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sv6sz.179baa2148c08985" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 28 02:59:04 ingress-addon-legacy-648725 kubelet[1424]: E1128 02:59:04.248540    1424 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-sv6sz.179baa2148c08985", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-sv6sz", UID:"452ec6e9-c9c5-4cd9-8d42-2956b69020c2", APIVersion:"v1", ResourceVersion:"479", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-648725"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc15173be0b8bd985, ext:230028400100, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc15173be0e63a606, ext:230076097125, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-sv6sz.179baa2148c08985" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Nov 28 02:59:07 ingress-addon-legacy-648725 kubelet[1424]: W1128 02:59:07.081106    1424 pod_container_deletor.go:77] Container "b8d89caf58474144755e9bd424880cc22447235f91859c140d114e67bc4bf05a" not found in pod's containers
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.399700    1424 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-df7zw" (UniqueName: "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-ingress-nginx-token-df7zw") pod "452ec6e9-c9c5-4cd9-8d42-2956b69020c2" (UID: "452ec6e9-c9c5-4cd9-8d42-2956b69020c2")
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.399747    1424 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-webhook-cert") pod "452ec6e9-c9c5-4cd9-8d42-2956b69020c2" (UID: "452ec6e9-c9c5-4cd9-8d42-2956b69020c2")
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.403169    1424 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-ingress-nginx-token-df7zw" (OuterVolumeSpecName: "ingress-nginx-token-df7zw") pod "452ec6e9-c9c5-4cd9-8d42-2956b69020c2" (UID: "452ec6e9-c9c5-4cd9-8d42-2956b69020c2"). InnerVolumeSpecName "ingress-nginx-token-df7zw". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.404621    1424 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "452ec6e9-c9c5-4cd9-8d42-2956b69020c2" (UID: "452ec6e9-c9c5-4cd9-8d42-2956b69020c2"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.500261    1424 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-webhook-cert") on node "ingress-addon-legacy-648725" DevicePath ""
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: I1128 02:59:08.500296    1424 reconciler.go:319] Volume detached for volume "ingress-nginx-token-df7zw" (UniqueName: "kubernetes.io/secret/452ec6e9-c9c5-4cd9-8d42-2956b69020c2-ingress-nginx-token-df7zw") on node "ingress-addon-legacy-648725" DevicePath ""
	Nov 28 02:59:08 ingress-addon-legacy-648725 kubelet[1424]: W1128 02:59:08.728247    1424 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/452ec6e9-c9c5-4cd9-8d42-2956b69020c2/volumes" does not exist
	
	* 
	* ==> storage-provisioner [ac01131533d10885bb870f56d067ddf5c57af56ffb462f24c3e4df6fa0988385] <==
	* I1128 02:55:33.217984       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 02:55:33.233512       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 02:55:33.233581       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 02:55:33.243092       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 02:55:33.243193       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea588e10-f026-48c4-bf04-f5b94649fd1e", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-648725_28c78293-b934-455b-a89a-e0a30bf6ee44 became leader
	I1128 02:55:33.244171       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-648725_28c78293-b934-455b-a89a-e0a30bf6ee44!
	I1128 02:55:33.345244       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-648725_28c78293-b934-455b-a89a-e0a30bf6ee44!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-648725 -n ingress-addon-legacy-648725
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-648725 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (169.16s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (3.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- sh -c "ping -c 1 192.168.39.1": exit status 1 (209.883002ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-cbjtg): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- sh -c "ping -c 1 192.168.39.1": exit status 1 (197.793239ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-pmx8j): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-112998 -n multinode-112998
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-112998 logs -n 25: (1.402995065s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-463819 ssh -- ls                    | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-463819 ssh --                       | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-463819                           | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	| start   | -p mount-start-2-463819                           | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC |                     |
	|         | --profile mount-start-2-463819                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-463819 ssh -- ls                    | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-463819 ssh --                       | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-463819                           | mount-start-2-463819 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	| delete  | -p mount-start-1-439948                           | mount-start-1-439948 | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:03 UTC |
	| start   | -p multinode-112998                               | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:03 UTC | 28 Nov 23 03:05 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- apply -f                   | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:05 UTC | 28 Nov 23 03:05 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- rollout                    | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:05 UTC | 28 Nov 23 03:06 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- get pods -o                | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- get pods -o                | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-cbjtg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-pmx8j --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-cbjtg --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-pmx8j --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-cbjtg -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-pmx8j -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- get pods -o                | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-cbjtg                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC |                     |
	|         | busybox-5bc68d56bd-cbjtg -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | busybox-5bc68d56bd-pmx8j                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-112998 -- exec                       | multinode-112998     | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC |                     |
	|         | busybox-5bc68d56bd-pmx8j -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
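
The two Audit entries above with no End Time are the probes that fail in PingHostFrom2Pods: each busybox pod first resolves host.minikube.internal, then pings the address it gets back (192.168.39.1, the gateway of the mk-multinode-112998 network created later in this log). A rough manual reproduction against a live profile, reusing the pod names from the table, looks like:

    # resolve the host gateway from inside the pod (same pipeline the test uses)
    out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # ping that address from the pod; this is the step with no recorded End Time above
    out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- sh -c "ping -c 1 192.168.39.1"
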
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 03:03:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 03:03:59.478481  353369 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:03:59.478645  353369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:03:59.478657  353369 out.go:309] Setting ErrFile to fd 2...
	I1128 03:03:59.478665  353369 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:03:59.478882  353369 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:03:59.479489  353369 out.go:303] Setting JSON to false
	I1128 03:03:59.480444  353369 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6390,"bootTime":1701134250,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 03:03:59.480514  353369 start.go:138] virtualization: kvm guest
	I1128 03:03:59.482890  353369 out.go:177] * [multinode-112998] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 03:03:59.484979  353369 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 03:03:59.484983  353369 notify.go:220] Checking for updates...
	I1128 03:03:59.486371  353369 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 03:03:59.487859  353369 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:03:59.489340  353369 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:03:59.490697  353369 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 03:03:59.492088  353369 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 03:03:59.493574  353369 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 03:03:59.528240  353369 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 03:03:59.529590  353369 start.go:298] selected driver: kvm2
	I1128 03:03:59.529604  353369 start.go:902] validating driver "kvm2" against <nil>
	I1128 03:03:59.529614  353369 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 03:03:59.530283  353369 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:03:59.530356  353369 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 03:03:59.544861  353369 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 03:03:59.544941  353369 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 03:03:59.545159  353369 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 03:03:59.545234  353369 cni.go:84] Creating CNI manager for ""
	I1128 03:03:59.545250  353369 cni.go:136] 0 nodes found, recommending kindnet
	I1128 03:03:59.545263  353369 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1128 03:03:59.545275  353369 start_flags.go:323] config:
	{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:03:59.545400  353369 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:03:59.547144  353369 out.go:177] * Starting control plane node multinode-112998 in cluster multinode-112998
	I1128 03:03:59.548610  353369 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:03:59.548649  353369 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 03:03:59.548660  353369 cache.go:56] Caching tarball of preloaded images
	I1128 03:03:59.548741  353369 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 03:03:59.548755  353369 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 03:03:59.549320  353369 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:03:59.549348  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json: {Name:mk11a998fb705ccb03f58ccb979762e4b39d921e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:03:59.549498  353369 start.go:365] acquiring machines lock for multinode-112998: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:03:59.549528  353369 start.go:369] acquired machines lock for "multinode-112998" in 17.35µs
	I1128 03:03:59.549544  353369 start.go:93] Provisioning new machine with config: &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 03:03:59.549604  353369 start.go:125] createHost starting for "" (driver="kvm2")
	I1128 03:03:59.551256  353369 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1128 03:03:59.551432  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:03:59.551466  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:03:59.565315  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I1128 03:03:59.565748  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:03:59.566262  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:03:59.566287  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:03:59.566686  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:03:59.566871  353369 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:03:59.567007  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:03:59.567167  353369 start.go:159] libmachine.API.Create for "multinode-112998" (driver="kvm2")
	I1128 03:03:59.567207  353369 client.go:168] LocalClient.Create starting
	I1128 03:03:59.567245  353369 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem
	I1128 03:03:59.567287  353369 main.go:141] libmachine: Decoding PEM data...
	I1128 03:03:59.567311  353369 main.go:141] libmachine: Parsing certificate...
	I1128 03:03:59.567392  353369 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem
	I1128 03:03:59.567421  353369 main.go:141] libmachine: Decoding PEM data...
	I1128 03:03:59.567441  353369 main.go:141] libmachine: Parsing certificate...
	I1128 03:03:59.567473  353369 main.go:141] libmachine: Running pre-create checks...
	I1128 03:03:59.567488  353369 main.go:141] libmachine: (multinode-112998) Calling .PreCreateCheck
	I1128 03:03:59.567837  353369 main.go:141] libmachine: (multinode-112998) Calling .GetConfigRaw
	I1128 03:03:59.568219  353369 main.go:141] libmachine: Creating machine...
	I1128 03:03:59.568237  353369 main.go:141] libmachine: (multinode-112998) Calling .Create
	I1128 03:03:59.568367  353369 main.go:141] libmachine: (multinode-112998) Creating KVM machine...
	I1128 03:03:59.569593  353369 main.go:141] libmachine: (multinode-112998) DBG | found existing default KVM network
	I1128 03:03:59.570318  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:03:59.570185  353393 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a50}
	I1128 03:03:59.575396  353369 main.go:141] libmachine: (multinode-112998) DBG | trying to create private KVM network mk-multinode-112998 192.168.39.0/24...
	I1128 03:03:59.647680  353369 main.go:141] libmachine: (multinode-112998) DBG | private KVM network mk-multinode-112998 192.168.39.0/24 created
	I1128 03:03:59.647721  353369 main.go:141] libmachine: (multinode-112998) Setting up store path in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998 ...
	I1128 03:03:59.647738  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:03:59.647653  353393 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:03:59.647840  353369 main.go:141] libmachine: (multinode-112998) Building disk image from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1128 03:03:59.647889  353369 main.go:141] libmachine: (multinode-112998) Downloading /home/jenkins/minikube-integration/17671-333305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1128 03:03:59.875512  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:03:59.875280  353393 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa...
	I1128 03:04:00.101224  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:00.101032  353393 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/multinode-112998.rawdisk...
	I1128 03:04:00.101272  353369 main.go:141] libmachine: (multinode-112998) DBG | Writing magic tar header
	I1128 03:04:00.101303  353369 main.go:141] libmachine: (multinode-112998) DBG | Writing SSH key tar header
	I1128 03:04:00.101316  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:00.101188  353393 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998 ...
	I1128 03:04:00.101337  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998
	I1128 03:04:00.101384  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998 (perms=drwx------)
	I1128 03:04:00.101411  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines
	I1128 03:04:00.101424  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines (perms=drwxr-xr-x)
	I1128 03:04:00.101442  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube (perms=drwxr-xr-x)
	I1128 03:04:00.101459  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:04:00.101472  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305 (perms=drwxrwxr-x)
	I1128 03:04:00.101489  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 03:04:00.101506  353369 main.go:141] libmachine: (multinode-112998) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 03:04:00.101520  353369 main.go:141] libmachine: (multinode-112998) Creating domain...
	I1128 03:04:00.101536  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305
	I1128 03:04:00.101552  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 03:04:00.101572  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home/jenkins
	I1128 03:04:00.101591  353369 main.go:141] libmachine: (multinode-112998) DBG | Checking permissions on dir: /home
	I1128 03:04:00.101605  353369 main.go:141] libmachine: (multinode-112998) DBG | Skipping /home - not owner
	I1128 03:04:00.102716  353369 main.go:141] libmachine: (multinode-112998) define libvirt domain using xml: 
	I1128 03:04:00.102742  353369 main.go:141] libmachine: (multinode-112998) <domain type='kvm'>
	I1128 03:04:00.102755  353369 main.go:141] libmachine: (multinode-112998)   <name>multinode-112998</name>
	I1128 03:04:00.102765  353369 main.go:141] libmachine: (multinode-112998)   <memory unit='MiB'>2200</memory>
	I1128 03:04:00.102776  353369 main.go:141] libmachine: (multinode-112998)   <vcpu>2</vcpu>
	I1128 03:04:00.102783  353369 main.go:141] libmachine: (multinode-112998)   <features>
	I1128 03:04:00.102793  353369 main.go:141] libmachine: (multinode-112998)     <acpi/>
	I1128 03:04:00.102809  353369 main.go:141] libmachine: (multinode-112998)     <apic/>
	I1128 03:04:00.102830  353369 main.go:141] libmachine: (multinode-112998)     <pae/>
	I1128 03:04:00.102855  353369 main.go:141] libmachine: (multinode-112998)     
	I1128 03:04:00.102881  353369 main.go:141] libmachine: (multinode-112998)   </features>
	I1128 03:04:00.102896  353369 main.go:141] libmachine: (multinode-112998)   <cpu mode='host-passthrough'>
	I1128 03:04:00.102908  353369 main.go:141] libmachine: (multinode-112998)   
	I1128 03:04:00.102921  353369 main.go:141] libmachine: (multinode-112998)   </cpu>
	I1128 03:04:00.102931  353369 main.go:141] libmachine: (multinode-112998)   <os>
	I1128 03:04:00.102949  353369 main.go:141] libmachine: (multinode-112998)     <type>hvm</type>
	I1128 03:04:00.102962  353369 main.go:141] libmachine: (multinode-112998)     <boot dev='cdrom'/>
	I1128 03:04:00.102975  353369 main.go:141] libmachine: (multinode-112998)     <boot dev='hd'/>
	I1128 03:04:00.102987  353369 main.go:141] libmachine: (multinode-112998)     <bootmenu enable='no'/>
	I1128 03:04:00.102998  353369 main.go:141] libmachine: (multinode-112998)   </os>
	I1128 03:04:00.103010  353369 main.go:141] libmachine: (multinode-112998)   <devices>
	I1128 03:04:00.103025  353369 main.go:141] libmachine: (multinode-112998)     <disk type='file' device='cdrom'>
	I1128 03:04:00.103046  353369 main.go:141] libmachine: (multinode-112998)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/boot2docker.iso'/>
	I1128 03:04:00.103060  353369 main.go:141] libmachine: (multinode-112998)       <target dev='hdc' bus='scsi'/>
	I1128 03:04:00.103072  353369 main.go:141] libmachine: (multinode-112998)       <readonly/>
	I1128 03:04:00.103085  353369 main.go:141] libmachine: (multinode-112998)     </disk>
	I1128 03:04:00.103097  353369 main.go:141] libmachine: (multinode-112998)     <disk type='file' device='disk'>
	I1128 03:04:00.103121  353369 main.go:141] libmachine: (multinode-112998)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 03:04:00.103145  353369 main.go:141] libmachine: (multinode-112998)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/multinode-112998.rawdisk'/>
	I1128 03:04:00.103160  353369 main.go:141] libmachine: (multinode-112998)       <target dev='hda' bus='virtio'/>
	I1128 03:04:00.103167  353369 main.go:141] libmachine: (multinode-112998)     </disk>
	I1128 03:04:00.103174  353369 main.go:141] libmachine: (multinode-112998)     <interface type='network'>
	I1128 03:04:00.103182  353369 main.go:141] libmachine: (multinode-112998)       <source network='mk-multinode-112998'/>
	I1128 03:04:00.103188  353369 main.go:141] libmachine: (multinode-112998)       <model type='virtio'/>
	I1128 03:04:00.103196  353369 main.go:141] libmachine: (multinode-112998)     </interface>
	I1128 03:04:00.103202  353369 main.go:141] libmachine: (multinode-112998)     <interface type='network'>
	I1128 03:04:00.103210  353369 main.go:141] libmachine: (multinode-112998)       <source network='default'/>
	I1128 03:04:00.103236  353369 main.go:141] libmachine: (multinode-112998)       <model type='virtio'/>
	I1128 03:04:00.103260  353369 main.go:141] libmachine: (multinode-112998)     </interface>
	I1128 03:04:00.103269  353369 main.go:141] libmachine: (multinode-112998)     <serial type='pty'>
	I1128 03:04:00.103277  353369 main.go:141] libmachine: (multinode-112998)       <target port='0'/>
	I1128 03:04:00.103283  353369 main.go:141] libmachine: (multinode-112998)     </serial>
	I1128 03:04:00.103291  353369 main.go:141] libmachine: (multinode-112998)     <console type='pty'>
	I1128 03:04:00.103298  353369 main.go:141] libmachine: (multinode-112998)       <target type='serial' port='0'/>
	I1128 03:04:00.103306  353369 main.go:141] libmachine: (multinode-112998)     </console>
	I1128 03:04:00.103312  353369 main.go:141] libmachine: (multinode-112998)     <rng model='virtio'>
	I1128 03:04:00.103321  353369 main.go:141] libmachine: (multinode-112998)       <backend model='random'>/dev/random</backend>
	I1128 03:04:00.103335  353369 main.go:141] libmachine: (multinode-112998)     </rng>
	I1128 03:04:00.103352  353369 main.go:141] libmachine: (multinode-112998)     
	I1128 03:04:00.103367  353369 main.go:141] libmachine: (multinode-112998)     
	I1128 03:04:00.103379  353369 main.go:141] libmachine: (multinode-112998)   </devices>
	I1128 03:04:00.103389  353369 main.go:141] libmachine: (multinode-112998) </domain>
	I1128 03:04:00.103400  353369 main.go:141] libmachine: (multinode-112998) 
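
The domain XML above is what the kvm2 driver defines before starting the VM. If it needs to be inspected on the CI host after the fact, the standard libvirt tooling can read it back (assuming the qemu:///system connection shown in this log):

    virsh -c qemu:///system dumpxml multinode-112998              # the defined domain, as emitted above
    virsh -c qemu:///system net-info mk-multinode-112998          # the private network created for this profile
    virsh -c qemu:///system net-dhcp-leases mk-multinode-112998   # leases; useful while the driver is "Waiting to get IP" below
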
	I1128 03:04:00.107595  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:0d:fa:75 in network default
	I1128 03:04:00.108112  353369 main.go:141] libmachine: (multinode-112998) Ensuring networks are active...
	I1128 03:04:00.108136  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:00.108798  353369 main.go:141] libmachine: (multinode-112998) Ensuring network default is active
	I1128 03:04:00.109066  353369 main.go:141] libmachine: (multinode-112998) Ensuring network mk-multinode-112998 is active
	I1128 03:04:00.109539  353369 main.go:141] libmachine: (multinode-112998) Getting domain xml...
	I1128 03:04:00.110221  353369 main.go:141] libmachine: (multinode-112998) Creating domain...
	I1128 03:04:01.330472  353369 main.go:141] libmachine: (multinode-112998) Waiting to get IP...
	I1128 03:04:01.331333  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:01.331654  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:01.331757  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:01.331657  353393 retry.go:31] will retry after 310.730048ms: waiting for machine to come up
	I1128 03:04:01.644206  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:01.644728  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:01.644758  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:01.644673  353393 retry.go:31] will retry after 363.041236ms: waiting for machine to come up
	I1128 03:04:02.009349  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:02.009846  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:02.009881  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:02.009807  353393 retry.go:31] will retry after 336.506068ms: waiting for machine to come up
	I1128 03:04:02.348315  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:02.348710  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:02.348741  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:02.348620  353393 retry.go:31] will retry after 569.983334ms: waiting for machine to come up
	I1128 03:04:02.920522  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:02.921067  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:02.921099  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:02.921001  353393 retry.go:31] will retry after 749.668316ms: waiting for machine to come up
	I1128 03:04:03.672799  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:03.673179  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:03.673203  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:03.673134  353393 retry.go:31] will retry after 813.557751ms: waiting for machine to come up
	I1128 03:04:04.488152  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:04.488612  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:04.488648  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:04.488552  353393 retry.go:31] will retry after 1.128318103s: waiting for machine to come up
	I1128 03:04:05.618535  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:05.618961  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:05.618992  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:05.618899  353393 retry.go:31] will retry after 1.124768516s: waiting for machine to come up
	I1128 03:04:06.745376  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:06.745727  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:06.745750  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:06.745679  353393 retry.go:31] will retry after 1.565915907s: waiting for machine to come up
	I1128 03:04:08.312829  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:08.313261  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:08.313294  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:08.313200  353393 retry.go:31] will retry after 1.687325991s: waiting for machine to come up
	I1128 03:04:10.002203  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:10.002608  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:10.002631  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:10.002561  353393 retry.go:31] will retry after 2.799626367s: waiting for machine to come up
	I1128 03:04:12.804193  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:12.804622  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:12.804654  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:12.804557  353393 retry.go:31] will retry after 3.087987002s: waiting for machine to come up
	I1128 03:04:15.894305  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:15.894730  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:15.894754  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:15.894674  353393 retry.go:31] will retry after 4.113569717s: waiting for machine to come up
	I1128 03:04:20.009583  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:20.009970  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:04:20.010002  353369 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:04:20.009890  353393 retry.go:31] will retry after 4.512185745s: waiting for machine to come up
	I1128 03:04:24.527539  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.527973  353369 main.go:141] libmachine: (multinode-112998) Found IP for machine: 192.168.39.73
	I1128 03:04:24.527998  353369 main.go:141] libmachine: (multinode-112998) Reserving static IP address...
	I1128 03:04:24.528013  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has current primary IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.528527  353369 main.go:141] libmachine: (multinode-112998) DBG | unable to find host DHCP lease matching {name: "multinode-112998", mac: "52:54:00:78:69:e6", ip: "192.168.39.73"} in network mk-multinode-112998
	I1128 03:04:24.599134  353369 main.go:141] libmachine: (multinode-112998) DBG | Getting to WaitForSSH function...
	I1128 03:04:24.599181  353369 main.go:141] libmachine: (multinode-112998) Reserved static IP address: 192.168.39.73
	I1128 03:04:24.599203  353369 main.go:141] libmachine: (multinode-112998) Waiting for SSH to be available...
	I1128 03:04:24.601777  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.602177  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:minikube Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:24.602221  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.602352  353369 main.go:141] libmachine: (multinode-112998) DBG | Using SSH client type: external
	I1128 03:04:24.602379  353369 main.go:141] libmachine: (multinode-112998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa (-rw-------)
	I1128 03:04:24.602427  353369 main.go:141] libmachine: (multinode-112998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 03:04:24.602456  353369 main.go:141] libmachine: (multinode-112998) DBG | About to run SSH command:
	I1128 03:04:24.602474  353369 main.go:141] libmachine: (multinode-112998) DBG | exit 0
	I1128 03:04:24.696840  353369 main.go:141] libmachine: (multinode-112998) DBG | SSH cmd err, output: <nil>: 
	I1128 03:04:24.697186  353369 main.go:141] libmachine: (multinode-112998) KVM machine creation complete!
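
At this point the machine answers on 192.168.39.73 with the key generated earlier. For interactive debugging, the same SSH options libmachine logs a few lines up can be reused directly, for example:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa \
      docker@192.168.39.73 "cat /etc/os-release"
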
	I1128 03:04:24.697548  353369 main.go:141] libmachine: (multinode-112998) Calling .GetConfigRaw
	I1128 03:04:24.698151  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:24.698338  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:24.698483  353369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 03:04:24.698500  353369 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:04:24.699651  353369 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 03:04:24.699672  353369 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 03:04:24.699681  353369 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 03:04:24.699692  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:24.701825  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.702213  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:24.702234  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.702379  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:24.702565  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.702767  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.702928  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:24.703093  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:24.703471  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:24.703486  353369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 03:04:24.828040  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:04:24.828063  353369 main.go:141] libmachine: Detecting the provisioner...
	I1128 03:04:24.828072  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:24.830678  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.831101  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:24.831131  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.831304  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:24.831500  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.831702  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.831845  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:24.832009  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:24.832364  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:24.832389  353369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 03:04:24.957700  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 03:04:24.957820  353369 main.go:141] libmachine: found compatible host: buildroot
	I1128 03:04:24.957836  353369 main.go:141] libmachine: Provisioning with buildroot...
	I1128 03:04:24.957853  353369 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:04:24.958138  353369 buildroot.go:166] provisioning hostname "multinode-112998"
	I1128 03:04:24.958177  353369 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:04:24.958442  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:24.961069  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.961461  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:24.961495  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:24.961721  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:24.961911  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.962091  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:24.962234  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:24.962434  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:24.962753  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:24.962766  353369 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998 && echo "multinode-112998" | sudo tee /etc/hostname
	I1128 03:04:25.101493  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-112998
	
	I1128 03:04:25.101523  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:25.104047  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.104337  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.104366  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.104569  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:25.104755  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.104969  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.105085  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:25.105224  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:25.105705  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:25.105734  353369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-112998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-112998/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-112998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:04:25.241296  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
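
The snippet above only touches /etc/hosts when the hostname is not already present, either rewriting the 127.0.1.1 line or appending one, so it is safe to re-run on restart. Whether it took effect can be checked from the host via the profile's ssh subcommand:

    out/minikube-linux-amd64 -p multinode-112998 ssh "grep multinode-112998 /etc/hosts"
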
	I1128 03:04:25.241328  353369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:04:25.241369  353369 buildroot.go:174] setting up certificates
	I1128 03:04:25.241378  353369 provision.go:83] configureAuth start
	I1128 03:04:25.241390  353369 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:04:25.241681  353369 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:04:25.244466  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.244826  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.244854  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.245038  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:25.247068  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.247436  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.247466  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.247579  353369 provision.go:138] copyHostCerts
	I1128 03:04:25.247617  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:04:25.247658  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:04:25.247667  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:04:25.247723  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:04:25.247816  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:04:25.247838  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:04:25.247845  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:04:25.247865  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:04:25.247907  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:04:25.247923  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:04:25.247929  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:04:25.247947  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:04:25.248030  353369 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.multinode-112998 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube multinode-112998]
	I1128 03:04:25.381291  353369 provision.go:172] copyRemoteCerts
	I1128 03:04:25.381368  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:04:25.381397  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:25.384131  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.384450  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.384483  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.384708  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:25.384959  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.385144  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:25.385289  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:25.477985  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 03:04:25.478058  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:04:25.504920  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 03:04:25.504997  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1128 03:04:25.527317  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 03:04:25.527394  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 03:04:25.549714  353369 provision.go:86] duration metric: configureAuth took 308.319937ms
	I1128 03:04:25.549746  353369 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:04:25.549948  353369 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:04:25.550053  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:25.553046  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.553361  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.553387  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.553530  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:25.553741  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.553914  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.554100  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:25.554276  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:25.554589  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:25.554604  353369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:04:25.884455  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:04:25.884486  353369 main.go:141] libmachine: Checking connection to Docker...
	I1128 03:04:25.884499  353369 main.go:141] libmachine: (multinode-112998) Calling .GetURL
	I1128 03:04:25.885934  353369 main.go:141] libmachine: (multinode-112998) DBG | Using libvirt version 6000000
	I1128 03:04:25.888395  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.888794  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.888826  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.888987  353369 main.go:141] libmachine: Docker is up and running!
	I1128 03:04:25.889011  353369 main.go:141] libmachine: Reticulating splines...
	I1128 03:04:25.889023  353369 client.go:171] LocalClient.Create took 26.321802784s
	I1128 03:04:25.889052  353369 start.go:167] duration metric: libmachine.API.Create for "multinode-112998" took 26.321885194s
	I1128 03:04:25.889067  353369 start.go:300] post-start starting for "multinode-112998" (driver="kvm2")
	I1128 03:04:25.889082  353369 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:04:25.889106  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:25.889378  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:04:25.889414  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:25.891657  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.892035  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:25.892068  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:25.892203  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:25.892430  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:25.892587  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:25.892747  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:25.985931  353369 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:04:25.989786  353369 command_runner.go:130] > NAME=Buildroot
	I1128 03:04:25.989807  353369 command_runner.go:130] > VERSION=2021.02.12-1-g21ec34a-dirty
	I1128 03:04:25.989812  353369 command_runner.go:130] > ID=buildroot
	I1128 03:04:25.989817  353369 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 03:04:25.989822  353369 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 03:04:25.990016  353369 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 03:04:25.990032  353369 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:04:25.990090  353369 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:04:25.990161  353369 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:04:25.990171  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 03:04:25.990264  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:04:25.998085  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:04:26.020151  353369 start.go:303] post-start completed in 131.069891ms
	I1128 03:04:26.020195  353369 main.go:141] libmachine: (multinode-112998) Calling .GetConfigRaw
	I1128 03:04:26.020794  353369 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:04:26.023620  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.023947  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:26.023981  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.024204  353369 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:04:26.024371  353369 start.go:128] duration metric: createHost completed in 26.474757174s
	I1128 03:04:26.024392  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:26.026251  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.026563  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:26.026593  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.026737  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:26.026922  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:26.027056  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:26.027190  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:26.027383  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:04:26.027818  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:04:26.027835  353369 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 03:04:26.153679  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701140666.138407464
	
	I1128 03:04:26.153709  353369 fix.go:206] guest clock: 1701140666.138407464
	I1128 03:04:26.153719  353369 fix.go:219] Guest: 2023-11-28 03:04:26.138407464 +0000 UTC Remote: 2023-11-28 03:04:26.024381508 +0000 UTC m=+26.595126987 (delta=114.025956ms)
	I1128 03:04:26.153747  353369 fix.go:190] guest clock delta is within tolerance: 114.025956ms
	I1128 03:04:26.153756  353369 start.go:83] releasing machines lock for "multinode-112998", held for 26.604219535s
	I1128 03:04:26.153779  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:26.154073  353369 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:04:26.156554  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.156937  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:26.156974  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.157202  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:26.157699  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:26.157884  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:26.157961  353369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:04:26.158017  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:26.158138  353369 ssh_runner.go:195] Run: cat /version.json
	I1128 03:04:26.158165  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:26.160614  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.160781  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.160946  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:26.160973  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.161110  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:26.161129  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:26.161162  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:26.161314  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:26.161328  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:26.161452  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:26.161512  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:26.161622  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:26.161681  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:26.161808  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:26.249459  353369 command_runner.go:130] > {"iso_version": "v1.32.1-1700142131-17634", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
	I1128 03:04:26.249661  353369 ssh_runner.go:195] Run: systemctl --version
	I1128 03:04:26.276731  353369 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 03:04:26.276774  353369 command_runner.go:130] > systemd 247 (247)
	I1128 03:04:26.276806  353369 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1128 03:04:26.276890  353369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:04:26.432175  353369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 03:04:26.439095  353369 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 03:04:26.439414  353369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:04:26.439488  353369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:04:26.453784  353369 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1128 03:04:26.453842  353369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 03:04:26.453851  353369 start.go:472] detecting cgroup driver to use...
	I1128 03:04:26.453904  353369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:04:26.467802  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:04:26.479893  353369 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:04:26.479954  353369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:04:26.491934  353369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:04:26.504387  353369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 03:04:26.610875  353369 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1128 03:04:26.610980  353369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:04:26.728462  353369 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 03:04:26.728506  353369 docker.go:219] disabling docker service ...
	I1128 03:04:26.728572  353369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:04:26.742453  353369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:04:26.753056  353369 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1128 03:04:26.753998  353369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:04:26.860243  353369 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 03:04:26.860337  353369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:04:26.967501  353369 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1128 03:04:26.967535  353369 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 03:04:26.967613  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:04:26.980240  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:04:26.996979  353369 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 03:04:26.997060  353369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 03:04:26.997138  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:04:27.005986  353369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 03:04:27.006059  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:04:27.015754  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:04:27.025446  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:04:27.034999  353369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
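[Editor's note] The sed edits just above rewrite CRI-O's minikube drop-in in place. As a rough illustration only (the drop-in's full contents are not printed in this log, and the section headers below are assumptions based on the usual CRI-O configuration layout), the affected lines of /etc/crio/crio.conf.d/02-crio.conf would end up looking something like:

	# illustrative excerpt, not captured from the test run
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"

The `crio config` dump later in this log confirms that the cgroup_manager and conmon_cgroup values took effect.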
	I1128 03:04:27.044973  353369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 03:04:27.053837  353369 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:04:27.053874  353369 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:04:27.053908  353369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 03:04:27.066984  353369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 03:04:27.076764  353369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 03:04:27.188711  353369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 03:04:27.357430  353369 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 03:04:27.357505  353369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 03:04:27.362248  353369 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 03:04:27.362272  353369 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 03:04:27.362279  353369 command_runner.go:130] > Device: 16h/22d	Inode: 781         Links: 1
	I1128 03:04:27.362286  353369 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:04:27.362291  353369 command_runner.go:130] > Access: 2023-11-28 03:04:27.329585994 +0000
	I1128 03:04:27.362297  353369 command_runner.go:130] > Modify: 2023-11-28 03:04:27.329585994 +0000
	I1128 03:04:27.362302  353369 command_runner.go:130] > Change: 2023-11-28 03:04:27.329585994 +0000
	I1128 03:04:27.362305  353369 command_runner.go:130] >  Birth: -
	I1128 03:04:27.362321  353369 start.go:540] Will wait 60s for crictl version
	I1128 03:04:27.362398  353369 ssh_runner.go:195] Run: which crictl
	I1128 03:04:27.366407  353369 command_runner.go:130] > /usr/bin/crictl
	I1128 03:04:27.366468  353369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 03:04:27.403099  353369 command_runner.go:130] > Version:  0.1.0
	I1128 03:04:27.403123  353369 command_runner.go:130] > RuntimeName:  cri-o
	I1128 03:04:27.403127  353369 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 03:04:27.403133  353369 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 03:04:27.403346  353369 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 03:04:27.403420  353369 ssh_runner.go:195] Run: crio --version
	I1128 03:04:27.450100  353369 command_runner.go:130] > crio version 1.24.1
	I1128 03:04:27.450129  353369 command_runner.go:130] > Version:          1.24.1
	I1128 03:04:27.450140  353369 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:04:27.450148  353369 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:04:27.450161  353369 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:04:27.450167  353369 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:04:27.450171  353369 command_runner.go:130] > Compiler:         gc
	I1128 03:04:27.450177  353369 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:04:27.450186  353369 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:04:27.450195  353369 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:04:27.450202  353369 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:04:27.450206  353369 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:04:27.451514  353369 ssh_runner.go:195] Run: crio --version
	I1128 03:04:27.497435  353369 command_runner.go:130] > crio version 1.24.1
	I1128 03:04:27.497461  353369 command_runner.go:130] > Version:          1.24.1
	I1128 03:04:27.497468  353369 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:04:27.497472  353369 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:04:27.497478  353369 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:04:27.497483  353369 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:04:27.497487  353369 command_runner.go:130] > Compiler:         gc
	I1128 03:04:27.497491  353369 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:04:27.497503  353369 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:04:27.497510  353369 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:04:27.497515  353369 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:04:27.497518  353369 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:04:27.500832  353369 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 03:04:27.502235  353369 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:04:27.504643  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:27.504946  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:27.504973  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:27.505306  353369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 03:04:27.509563  353369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 03:04:27.523408  353369 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:04:27.523463  353369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 03:04:27.558208  353369 command_runner.go:130] > {
	I1128 03:04:27.558230  353369 command_runner.go:130] >   "images": [
	I1128 03:04:27.558234  353369 command_runner.go:130] >   ]
	I1128 03:04:27.558237  353369 command_runner.go:130] > }
	I1128 03:04:27.559420  353369 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 03:04:27.559505  353369 ssh_runner.go:195] Run: which lz4
	I1128 03:04:27.563019  353369 command_runner.go:130] > /usr/bin/lz4
	I1128 03:04:27.563080  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1128 03:04:27.563149  353369 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 03:04:27.566941  353369 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 03:04:27.566978  353369 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 03:04:27.566993  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 03:04:29.237640  353369 crio.go:444] Took 1.674507 seconds to copy over tarball
	I1128 03:04:29.237726  353369 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 03:04:32.071509  353369 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.833737753s)
	I1128 03:04:32.071533  353369 crio.go:451] Took 2.833859 seconds to extract the tarball
	I1128 03:04:32.071542  353369 ssh_runner.go:146] rm: /preloaded.tar.lz4
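[Editor's note] For anyone reproducing the preload transfer by hand, the three steps logged above (scp of the tarball, lz4 extraction into /var, removal of the tarball) amount to roughly the following sketch. The key and tarball paths are the ones shown in the log; driving the copy with plain ssh/tee instead of minikube's internal ssh_runner is an assumption made here purely for illustration.

	KEY=/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa
	TARBALL=/home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	# copy the tarball to the node's root filesystem (writing to / needs sudo on the remote side)
	cat "$TARBALL" | ssh -i "$KEY" docker@192.168.39.73 'sudo tee /preloaded.tar.lz4 >/dev/null'
	# extract into /var and clean up, mirroring the commands in the log
	ssh -i "$KEY" docker@192.168.39.73 'sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4'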
	I1128 03:04:32.110952  353369 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 03:04:32.183770  353369 command_runner.go:130] > {
	I1128 03:04:32.183793  353369 command_runner.go:130] >   "images": [
	I1128 03:04:32.183797  353369 command_runner.go:130] >     {
	I1128 03:04:32.183805  353369 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1128 03:04:32.183810  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.183816  353369 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1128 03:04:32.183819  353369 command_runner.go:130] >       ],
	I1128 03:04:32.183827  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.183840  353369 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1128 03:04:32.183851  353369 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1128 03:04:32.183856  353369 command_runner.go:130] >       ],
	I1128 03:04:32.183865  353369 command_runner.go:130] >       "size": "65258016",
	I1128 03:04:32.183872  353369 command_runner.go:130] >       "uid": null,
	I1128 03:04:32.183879  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.183891  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.183895  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.183902  353369 command_runner.go:130] >     },
	I1128 03:04:32.183906  353369 command_runner.go:130] >     {
	I1128 03:04:32.183914  353369 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1128 03:04:32.183920  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.183925  353369 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1128 03:04:32.183929  353369 command_runner.go:130] >       ],
	I1128 03:04:32.183935  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.183948  353369 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1128 03:04:32.183961  353369 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1128 03:04:32.183972  353369 command_runner.go:130] >       ],
	I1128 03:04:32.183982  353369 command_runner.go:130] >       "size": "31470524",
	I1128 03:04:32.183991  353369 command_runner.go:130] >       "uid": null,
	I1128 03:04:32.183997  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184004  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184010  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184014  353369 command_runner.go:130] >     },
	I1128 03:04:32.184017  353369 command_runner.go:130] >     {
	I1128 03:04:32.184023  353369 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1128 03:04:32.184028  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184033  353369 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1128 03:04:32.184044  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184051  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184060  353369 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1128 03:04:32.184074  353369 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1128 03:04:32.184082  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184089  353369 command_runner.go:130] >       "size": "53621675",
	I1128 03:04:32.184097  353369 command_runner.go:130] >       "uid": null,
	I1128 03:04:32.184105  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184113  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184123  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184129  353369 command_runner.go:130] >     },
	I1128 03:04:32.184136  353369 command_runner.go:130] >     {
	I1128 03:04:32.184142  353369 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1128 03:04:32.184148  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184154  353369 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1128 03:04:32.184158  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184163  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184172  353369 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1128 03:04:32.184191  353369 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1128 03:04:32.184207  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184218  353369 command_runner.go:130] >       "size": "295456551",
	I1128 03:04:32.184225  353369 command_runner.go:130] >       "uid": {
	I1128 03:04:32.184234  353369 command_runner.go:130] >         "value": "0"
	I1128 03:04:32.184243  353369 command_runner.go:130] >       },
	I1128 03:04:32.184250  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184254  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184263  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184273  353369 command_runner.go:130] >     },
	I1128 03:04:32.184282  353369 command_runner.go:130] >     {
	I1128 03:04:32.184297  353369 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1128 03:04:32.184307  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184319  353369 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1128 03:04:32.184328  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184336  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184345  353369 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1128 03:04:32.184361  353369 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1128 03:04:32.184375  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184386  353369 command_runner.go:130] >       "size": "127226832",
	I1128 03:04:32.184396  353369 command_runner.go:130] >       "uid": {
	I1128 03:04:32.184406  353369 command_runner.go:130] >         "value": "0"
	I1128 03:04:32.184415  353369 command_runner.go:130] >       },
	I1128 03:04:32.184425  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184435  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184443  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184449  353369 command_runner.go:130] >     },
	I1128 03:04:32.184455  353369 command_runner.go:130] >     {
	I1128 03:04:32.184469  353369 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1128 03:04:32.184479  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184488  353369 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1128 03:04:32.184498  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184508  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184523  353369 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1128 03:04:32.184538  353369 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1128 03:04:32.184546  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184554  353369 command_runner.go:130] >       "size": "123261750",
	I1128 03:04:32.184565  353369 command_runner.go:130] >       "uid": {
	I1128 03:04:32.184580  353369 command_runner.go:130] >         "value": "0"
	I1128 03:04:32.184587  353369 command_runner.go:130] >       },
	I1128 03:04:32.184597  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184607  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184616  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184625  353369 command_runner.go:130] >     },
	I1128 03:04:32.184634  353369 command_runner.go:130] >     {
	I1128 03:04:32.184647  353369 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1128 03:04:32.184654  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184661  353369 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1128 03:04:32.184671  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184681  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184698  353369 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1128 03:04:32.184717  353369 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1128 03:04:32.184723  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184733  353369 command_runner.go:130] >       "size": "74749335",
	I1128 03:04:32.184746  353369 command_runner.go:130] >       "uid": null,
	I1128 03:04:32.184753  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184758  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184769  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184779  353369 command_runner.go:130] >     },
	I1128 03:04:32.184786  353369 command_runner.go:130] >     {
	I1128 03:04:32.184800  353369 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1128 03:04:32.184809  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.184821  353369 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1128 03:04:32.184829  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184839  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.184872  353369 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1128 03:04:32.184900  353369 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1128 03:04:32.184907  353369 command_runner.go:130] >       ],
	I1128 03:04:32.184915  353369 command_runner.go:130] >       "size": "61551410",
	I1128 03:04:32.184925  353369 command_runner.go:130] >       "uid": {
	I1128 03:04:32.184935  353369 command_runner.go:130] >         "value": "0"
	I1128 03:04:32.184944  353369 command_runner.go:130] >       },
	I1128 03:04:32.184957  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.184966  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.184974  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.184983  353369 command_runner.go:130] >     },
	I1128 03:04:32.184992  353369 command_runner.go:130] >     {
	I1128 03:04:32.185003  353369 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1128 03:04:32.185014  353369 command_runner.go:130] >       "repoTags": [
	I1128 03:04:32.185025  353369 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 03:04:32.185031  353369 command_runner.go:130] >       ],
	I1128 03:04:32.185041  353369 command_runner.go:130] >       "repoDigests": [
	I1128 03:04:32.185053  353369 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1128 03:04:32.185066  353369 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1128 03:04:32.185072  353369 command_runner.go:130] >       ],
	I1128 03:04:32.185078  353369 command_runner.go:130] >       "size": "750414",
	I1128 03:04:32.185088  353369 command_runner.go:130] >       "uid": {
	I1128 03:04:32.185095  353369 command_runner.go:130] >         "value": "65535"
	I1128 03:04:32.185104  353369 command_runner.go:130] >       },
	I1128 03:04:32.185111  353369 command_runner.go:130] >       "username": "",
	I1128 03:04:32.185125  353369 command_runner.go:130] >       "spec": null,
	I1128 03:04:32.185135  353369 command_runner.go:130] >       "pinned": false
	I1128 03:04:32.185144  353369 command_runner.go:130] >     }
	I1128 03:04:32.185152  353369 command_runner.go:130] >   ]
	I1128 03:04:32.185158  353369 command_runner.go:130] > }
	I1128 03:04:32.185295  353369 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 03:04:32.185310  353369 cache_images.go:84] Images are preloaded, skipping loading
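[Editor's note] The JSON above is what minikube compares against its expected image list before concluding that the preload succeeded. A quick manual equivalent, assuming jq is available on the node (the log does not show whether it is), would be:

	sudo crictl images --output json | jq -r '.images[].repoTags[]'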
	I1128 03:04:32.185386  353369 ssh_runner.go:195] Run: crio config
	I1128 03:04:32.238059  353369 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 03:04:32.238095  353369 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 03:04:32.238106  353369 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 03:04:32.238111  353369 command_runner.go:130] > #
	I1128 03:04:32.238123  353369 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 03:04:32.238133  353369 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 03:04:32.238142  353369 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 03:04:32.238158  353369 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 03:04:32.238168  353369 command_runner.go:130] > # reload'.
	I1128 03:04:32.238180  353369 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 03:04:32.238192  353369 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 03:04:32.238204  353369 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 03:04:32.238217  353369 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 03:04:32.238223  353369 command_runner.go:130] > [crio]
	I1128 03:04:32.238232  353369 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 03:04:32.238243  353369 command_runner.go:130] > # containers images, in this directory.
	I1128 03:04:32.238255  353369 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 03:04:32.238282  353369 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 03:04:32.238916  353369 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 03:04:32.238937  353369 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 03:04:32.238946  353369 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 03:04:32.239218  353369 command_runner.go:130] > storage_driver = "overlay"
	I1128 03:04:32.239239  353369 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 03:04:32.239249  353369 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 03:04:32.239259  353369 command_runner.go:130] > storage_option = [
	I1128 03:04:32.239450  353369 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 03:04:32.239465  353369 command_runner.go:130] > ]
	I1128 03:04:32.239490  353369 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 03:04:32.239503  353369 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 03:04:32.239820  353369 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 03:04:32.239835  353369 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 03:04:32.239845  353369 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 03:04:32.239853  353369 command_runner.go:130] > # always happen on a node reboot
	I1128 03:04:32.240347  353369 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 03:04:32.240363  353369 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 03:04:32.240372  353369 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 03:04:32.240396  353369 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 03:04:32.240785  353369 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 03:04:32.240807  353369 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 03:04:32.240819  353369 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 03:04:32.241339  353369 command_runner.go:130] > # internal_wipe = true
	I1128 03:04:32.241356  353369 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 03:04:32.241366  353369 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 03:04:32.241378  353369 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 03:04:32.241840  353369 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 03:04:32.241858  353369 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 03:04:32.241869  353369 command_runner.go:130] > [crio.api]
	I1128 03:04:32.241887  353369 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 03:04:32.242237  353369 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 03:04:32.242253  353369 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 03:04:32.242775  353369 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 03:04:32.242793  353369 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 03:04:32.242806  353369 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 03:04:32.242833  353369 command_runner.go:130] > # stream_port = "0"
	I1128 03:04:32.242858  353369 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 03:04:32.242866  353369 command_runner.go:130] > # stream_enable_tls = false
	I1128 03:04:32.242876  353369 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 03:04:32.243005  353369 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 03:04:32.243026  353369 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 03:04:32.243035  353369 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 03:04:32.243042  353369 command_runner.go:130] > # minutes.
	I1128 03:04:32.243050  353369 command_runner.go:130] > # stream_tls_cert = ""
	I1128 03:04:32.243061  353369 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 03:04:32.243074  353369 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 03:04:32.243101  353369 command_runner.go:130] > # stream_tls_key = ""
	I1128 03:04:32.243115  353369 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 03:04:32.243128  353369 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 03:04:32.243139  353369 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 03:04:32.243146  353369 command_runner.go:130] > # stream_tls_ca = ""
	I1128 03:04:32.243159  353369 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:04:32.243170  353369 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 03:04:32.243181  353369 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:04:32.243197  353369 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 03:04:32.243235  353369 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 03:04:32.243249  353369 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 03:04:32.243255  353369 command_runner.go:130] > [crio.runtime]
	I1128 03:04:32.243268  353369 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 03:04:32.243277  353369 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 03:04:32.243287  353369 command_runner.go:130] > # "nofile=1024:2048"
	I1128 03:04:32.243298  353369 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 03:04:32.243307  353369 command_runner.go:130] > # default_ulimits = [
	I1128 03:04:32.243338  353369 command_runner.go:130] > # ]
	I1128 03:04:32.243365  353369 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 03:04:32.243371  353369 command_runner.go:130] > # no_pivot = false
	I1128 03:04:32.243379  353369 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 03:04:32.243390  353369 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 03:04:32.243398  353369 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 03:04:32.243407  353369 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 03:04:32.243416  353369 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 03:04:32.243430  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:04:32.243448  353369 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 03:04:32.243464  353369 command_runner.go:130] > # Cgroup setting for conmon
	I1128 03:04:32.243474  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 03:04:32.243481  353369 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 03:04:32.243491  353369 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 03:04:32.243499  353369 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 03:04:32.243514  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:04:32.243523  353369 command_runner.go:130] > conmon_env = [
	I1128 03:04:32.243561  353369 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 03:04:32.243571  353369 command_runner.go:130] > ]
	I1128 03:04:32.243580  353369 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 03:04:32.243591  353369 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 03:04:32.243603  353369 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 03:04:32.243612  353369 command_runner.go:130] > # default_env = [
	I1128 03:04:32.243617  353369 command_runner.go:130] > # ]
	I1128 03:04:32.243626  353369 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 03:04:32.243635  353369 command_runner.go:130] > # selinux = false
	I1128 03:04:32.243644  353369 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 03:04:32.243663  353369 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 03:04:32.243676  353369 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 03:04:32.243686  353369 command_runner.go:130] > # seccomp_profile = ""
	I1128 03:04:32.243697  353369 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 03:04:32.243709  353369 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 03:04:32.243718  353369 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 03:04:32.243729  353369 command_runner.go:130] > # which might increase security.
	I1128 03:04:32.243737  353369 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 03:04:32.243750  353369 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 03:04:32.243763  353369 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 03:04:32.243775  353369 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 03:04:32.243787  353369 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 03:04:32.243798  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:04:32.243808  353369 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 03:04:32.243819  353369 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 03:04:32.243830  353369 command_runner.go:130] > # the cgroup blockio controller.
	I1128 03:04:32.243837  353369 command_runner.go:130] > # blockio_config_file = ""
	I1128 03:04:32.243851  353369 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 03:04:32.243867  353369 command_runner.go:130] > # irqbalance daemon.
	I1128 03:04:32.243879  353369 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 03:04:32.243892  353369 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 03:04:32.243903  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:04:32.243912  353369 command_runner.go:130] > # rdt_config_file = ""
	I1128 03:04:32.243925  353369 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 03:04:32.243932  353369 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 03:04:32.243941  353369 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 03:04:32.243951  353369 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 03:04:32.243961  353369 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 03:04:32.243974  353369 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 03:04:32.243983  353369 command_runner.go:130] > # will be added.
	I1128 03:04:32.243990  353369 command_runner.go:130] > # default_capabilities = [
	I1128 03:04:32.244004  353369 command_runner.go:130] > # 	"CHOWN",
	I1128 03:04:32.244010  353369 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 03:04:32.244025  353369 command_runner.go:130] > # 	"FSETID",
	I1128 03:04:32.244034  353369 command_runner.go:130] > # 	"FOWNER",
	I1128 03:04:32.244041  353369 command_runner.go:130] > # 	"SETGID",
	I1128 03:04:32.244058  353369 command_runner.go:130] > # 	"SETUID",
	I1128 03:04:32.244067  353369 command_runner.go:130] > # 	"SETPCAP",
	I1128 03:04:32.244075  353369 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 03:04:32.244083  353369 command_runner.go:130] > # 	"KILL",
	I1128 03:04:32.244089  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244101  353369 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 03:04:32.244113  353369 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:04:32.244123  353369 command_runner.go:130] > # default_sysctls = [
	I1128 03:04:32.244178  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244191  353369 command_runner.go:130] > # List of devices on the host that a
	I1128 03:04:32.244201  353369 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 03:04:32.244211  353369 command_runner.go:130] > # allowed_devices = [
	I1128 03:04:32.244222  353369 command_runner.go:130] > # 	"/dev/fuse",
	I1128 03:04:32.244231  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244239  353369 command_runner.go:130] > # List of additional devices, specified as
	I1128 03:04:32.244251  353369 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 03:04:32.244262  353369 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 03:04:32.244308  353369 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:04:32.244323  353369 command_runner.go:130] > # additional_devices = [
	I1128 03:04:32.244332  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244340  353369 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 03:04:32.244350  353369 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 03:04:32.244356  353369 command_runner.go:130] > # 	"/etc/cdi",
	I1128 03:04:32.244364  353369 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 03:04:32.244373  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244384  353369 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 03:04:32.244396  353369 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 03:04:32.244405  353369 command_runner.go:130] > # Defaults to false.
	I1128 03:04:32.244413  353369 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 03:04:32.244426  353369 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 03:04:32.244438  353369 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 03:04:32.244448  353369 command_runner.go:130] > # hooks_dir = [
	I1128 03:04:32.244455  353369 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 03:04:32.244465  353369 command_runner.go:130] > # ]
	I1128 03:04:32.244474  353369 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 03:04:32.244487  353369 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 03:04:32.244502  353369 command_runner.go:130] > # its default mounts from the following two files:
	I1128 03:04:32.244511  353369 command_runner.go:130] > #
	I1128 03:04:32.244521  353369 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 03:04:32.244534  353369 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 03:04:32.244546  353369 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 03:04:32.244554  353369 command_runner.go:130] > #
	I1128 03:04:32.244564  353369 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 03:04:32.244579  353369 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 03:04:32.244592  353369 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 03:04:32.244603  353369 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 03:04:32.244612  353369 command_runner.go:130] > #
	I1128 03:04:32.244618  353369 command_runner.go:130] > # default_mounts_file = ""
	I1128 03:04:32.244635  353369 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 03:04:32.244648  353369 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 03:04:32.244657  353369 command_runner.go:130] > pids_limit = 1024
	I1128 03:04:32.244667  353369 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 03:04:32.244680  353369 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 03:04:32.244692  353369 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 03:04:32.244713  353369 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 03:04:32.244723  353369 command_runner.go:130] > # log_size_max = -1
	I1128 03:04:32.244734  353369 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 03:04:32.244743  353369 command_runner.go:130] > # log_to_journald = false
	I1128 03:04:32.244753  353369 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 03:04:32.244765  353369 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 03:04:32.244773  353369 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 03:04:32.244784  353369 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 03:04:32.244792  353369 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 03:04:32.244802  353369 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 03:04:32.244811  353369 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 03:04:32.244823  353369 command_runner.go:130] > # read_only = false
	I1128 03:04:32.244837  353369 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 03:04:32.244850  353369 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 03:04:32.244861  353369 command_runner.go:130] > # live configuration reload.
	I1128 03:04:32.244867  353369 command_runner.go:130] > # log_level = "info"
	I1128 03:04:32.244893  353369 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 03:04:32.244902  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:04:32.244974  353369 command_runner.go:130] > # log_filter = ""
	I1128 03:04:32.244989  353369 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 03:04:32.245004  353369 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 03:04:32.245014  353369 command_runner.go:130] > # separated by comma.
	I1128 03:04:32.245021  353369 command_runner.go:130] > # uid_mappings = ""
	I1128 03:04:32.245033  353369 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 03:04:32.245044  353369 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 03:04:32.245052  353369 command_runner.go:130] > # separated by comma.
	I1128 03:04:32.245057  353369 command_runner.go:130] > # gid_mappings = ""
	I1128 03:04:32.245067  353369 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 03:04:32.245075  353369 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:04:32.245086  353369 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:04:32.245096  353369 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 03:04:32.245108  353369 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 03:04:32.245121  353369 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:04:32.245132  353369 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:04:32.245143  353369 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 03:04:32.245156  353369 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 03:04:32.245176  353369 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 03:04:32.245189  353369 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 03:04:32.245199  353369 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 03:04:32.245210  353369 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 03:04:32.245222  353369 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 03:04:32.245233  353369 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 03:04:32.245241  353369 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 03:04:32.245252  353369 command_runner.go:130] > drop_infra_ctr = false
	I1128 03:04:32.245263  353369 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 03:04:32.245275  353369 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 03:04:32.245287  353369 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 03:04:32.245297  353369 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 03:04:32.245307  353369 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 03:04:32.245319  353369 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 03:04:32.245326  353369 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 03:04:32.245337  353369 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 03:04:32.245348  353369 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 03:04:32.245362  353369 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 03:04:32.245379  353369 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 03:04:32.245393  353369 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 03:04:32.245402  353369 command_runner.go:130] > # default_runtime = "runc"
	I1128 03:04:32.245411  353369 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 03:04:32.245425  353369 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 03:04:32.245443  353369 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 03:04:32.245455  353369 command_runner.go:130] > # creation as a file is not desired either.
	I1128 03:04:32.245470  353369 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 03:04:32.245485  353369 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 03:04:32.245496  353369 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 03:04:32.245501  353369 command_runner.go:130] > # ]
	I1128 03:04:32.245512  353369 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 03:04:32.245525  353369 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 03:04:32.245538  353369 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 03:04:32.245551  353369 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 03:04:32.245558  353369 command_runner.go:130] > #
	I1128 03:04:32.245565  353369 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 03:04:32.245578  353369 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 03:04:32.245592  353369 command_runner.go:130] > #  runtime_type = "oci"
	I1128 03:04:32.245602  353369 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 03:04:32.245609  353369 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 03:04:32.245618  353369 command_runner.go:130] > #  allowed_annotations = []
	I1128 03:04:32.245624  353369 command_runner.go:130] > # Where:
	I1128 03:04:32.245635  353369 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 03:04:32.245648  353369 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 03:04:32.245663  353369 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 03:04:32.245676  353369 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 03:04:32.245685  353369 command_runner.go:130] > #   in $PATH.
	I1128 03:04:32.245695  353369 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 03:04:32.245705  353369 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 03:04:32.245716  353369 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 03:04:32.245725  353369 command_runner.go:130] > #   state.
	I1128 03:04:32.245740  353369 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 03:04:32.245749  353369 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1128 03:04:32.245755  353369 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 03:04:32.245789  353369 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 03:04:32.245801  353369 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 03:04:32.245812  353369 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 03:04:32.245822  353369 command_runner.go:130] > #   The currently recognized values are:
	I1128 03:04:32.245834  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 03:04:32.245852  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 03:04:32.245865  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 03:04:32.245877  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 03:04:32.245891  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 03:04:32.245902  353369 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 03:04:32.245915  353369 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 03:04:32.245929  353369 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 03:04:32.245939  353369 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 03:04:32.245944  353369 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 03:04:32.245950  353369 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 03:04:32.245954  353369 command_runner.go:130] > runtime_type = "oci"
	I1128 03:04:32.245959  353369 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 03:04:32.245964  353369 command_runner.go:130] > runtime_config_path = ""
	I1128 03:04:32.245968  353369 command_runner.go:130] > monitor_path = ""
	I1128 03:04:32.245979  353369 command_runner.go:130] > monitor_cgroup = ""
	I1128 03:04:32.245990  353369 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 03:04:32.246008  353369 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 03:04:32.246018  353369 command_runner.go:130] > # running containers
	I1128 03:04:32.246025  353369 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 03:04:32.246038  353369 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 03:04:32.246105  353369 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 03:04:32.246127  353369 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 03:04:32.246134  353369 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 03:04:32.246142  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 03:04:32.246154  353369 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 03:04:32.246161  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 03:04:32.246173  353369 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 03:04:32.246182  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 03:04:32.246189  353369 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 03:04:32.246196  353369 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 03:04:32.246203  353369 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 03:04:32.246212  353369 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1128 03:04:32.246223  353369 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 03:04:32.246231  353369 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 03:04:32.246241  353369 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 03:04:32.246250  353369 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 03:04:32.246259  353369 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 03:04:32.246267  353369 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 03:04:32.246271  353369 command_runner.go:130] > # Example:
	I1128 03:04:32.246276  353369 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 03:04:32.246283  353369 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 03:04:32.246288  353369 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 03:04:32.246295  353369 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 03:04:32.246299  353369 command_runner.go:130] > # cpuset = 0
	I1128 03:04:32.246303  353369 command_runner.go:130] > # cpushares = "0-1"
	I1128 03:04:32.246307  353369 command_runner.go:130] > # Where:
	I1128 03:04:32.246312  353369 command_runner.go:130] > # The workload name is workload-type.
	I1128 03:04:32.246319  353369 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 03:04:32.246326  353369 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 03:04:32.246332  353369 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 03:04:32.246354  353369 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 03:04:32.246368  353369 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 03:04:32.246374  353369 command_runner.go:130] > # 
	I1128 03:04:32.246380  353369 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 03:04:32.246386  353369 command_runner.go:130] > #
	I1128 03:04:32.246392  353369 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 03:04:32.246401  353369 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 03:04:32.246410  353369 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 03:04:32.246416  353369 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 03:04:32.246424  353369 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 03:04:32.246428  353369 command_runner.go:130] > [crio.image]
	I1128 03:04:32.246436  353369 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 03:04:32.246441  353369 command_runner.go:130] > # default_transport = "docker://"
	I1128 03:04:32.246447  353369 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 03:04:32.246482  353369 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:04:32.246489  353369 command_runner.go:130] > # global_auth_file = ""
	I1128 03:04:32.246494  353369 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 03:04:32.246502  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:04:32.246516  353369 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 03:04:32.246529  353369 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 03:04:32.246541  353369 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:04:32.246548  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:04:32.246557  353369 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 03:04:32.246566  353369 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 03:04:32.246575  353369 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 03:04:32.246583  353369 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 03:04:32.246592  353369 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 03:04:32.246598  353369 command_runner.go:130] > # pause_command = "/pause"
	I1128 03:04:32.246607  353369 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 03:04:32.246616  353369 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 03:04:32.246625  353369 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 03:04:32.246639  353369 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 03:04:32.246648  353369 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 03:04:32.246652  353369 command_runner.go:130] > # signature_policy = ""
	I1128 03:04:32.246660  353369 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 03:04:32.246667  353369 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 03:04:32.246678  353369 command_runner.go:130] > # changing them here.
	I1128 03:04:32.246685  353369 command_runner.go:130] > # insecure_registries = [
	I1128 03:04:32.246688  353369 command_runner.go:130] > # ]
	I1128 03:04:32.246694  353369 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 03:04:32.246701  353369 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 03:04:32.246706  353369 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 03:04:32.246713  353369 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 03:04:32.246717  353369 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 03:04:32.246725  353369 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 03:04:32.246729  353369 command_runner.go:130] > # CNI plugins.
	I1128 03:04:32.246734  353369 command_runner.go:130] > [crio.network]
	I1128 03:04:32.246740  353369 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 03:04:32.246747  353369 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 03:04:32.246752  353369 command_runner.go:130] > # cni_default_network = ""
	I1128 03:04:32.246760  353369 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 03:04:32.246764  353369 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 03:04:32.246770  353369 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 03:04:32.246774  353369 command_runner.go:130] > # plugin_dirs = [
	I1128 03:04:32.246780  353369 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 03:04:32.246789  353369 command_runner.go:130] > # ]
	I1128 03:04:32.246794  353369 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 03:04:32.246800  353369 command_runner.go:130] > [crio.metrics]
	I1128 03:04:32.246805  353369 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 03:04:32.246809  353369 command_runner.go:130] > enable_metrics = true
	I1128 03:04:32.246815  353369 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 03:04:32.246820  353369 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 03:04:32.246828  353369 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 03:04:32.246836  353369 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 03:04:32.246844  353369 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 03:04:32.246849  353369 command_runner.go:130] > # metrics_collectors = [
	I1128 03:04:32.246855  353369 command_runner.go:130] > # 	"operations",
	I1128 03:04:32.246860  353369 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 03:04:32.246866  353369 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 03:04:32.246872  353369 command_runner.go:130] > # 	"operations_errors",
	I1128 03:04:32.246878  353369 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 03:04:32.246883  353369 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 03:04:32.246893  353369 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 03:04:32.246897  353369 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 03:04:32.246903  353369 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 03:04:32.246907  353369 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 03:04:32.246912  353369 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 03:04:32.246917  353369 command_runner.go:130] > # 	"containers_oom_total",
	I1128 03:04:32.246923  353369 command_runner.go:130] > # 	"containers_oom",
	I1128 03:04:32.246927  353369 command_runner.go:130] > # 	"processes_defunct",
	I1128 03:04:32.246932  353369 command_runner.go:130] > # 	"operations_total",
	I1128 03:04:32.246936  353369 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 03:04:32.246942  353369 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 03:04:32.246946  353369 command_runner.go:130] > # 	"operations_errors_total",
	I1128 03:04:32.246950  353369 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 03:04:32.246955  353369 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 03:04:32.246961  353369 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 03:04:32.246965  353369 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 03:04:32.246971  353369 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 03:04:32.246981  353369 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 03:04:32.246990  353369 command_runner.go:130] > # ]
	I1128 03:04:32.247005  353369 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 03:04:32.247015  353369 command_runner.go:130] > # metrics_port = 9090
	I1128 03:04:32.247023  353369 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 03:04:32.247033  353369 command_runner.go:130] > # metrics_socket = ""
	I1128 03:04:32.247041  353369 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 03:04:32.247053  353369 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 03:04:32.247066  353369 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 03:04:32.247073  353369 command_runner.go:130] > # certificate on any modification event.
	I1128 03:04:32.247077  353369 command_runner.go:130] > # metrics_cert = ""
	I1128 03:04:32.247085  353369 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 03:04:32.247090  353369 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 03:04:32.247096  353369 command_runner.go:130] > # metrics_key = ""
	I1128 03:04:32.247102  353369 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 03:04:32.247108  353369 command_runner.go:130] > [crio.tracing]
	I1128 03:04:32.247113  353369 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 03:04:32.247119  353369 command_runner.go:130] > # enable_tracing = false
	I1128 03:04:32.247127  353369 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 03:04:32.247136  353369 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 03:04:32.247164  353369 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 03:04:32.247171  353369 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 03:04:32.247177  353369 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 03:04:32.247183  353369 command_runner.go:130] > [crio.stats]
	I1128 03:04:32.247192  353369 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 03:04:32.247203  353369 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 03:04:32.247211  353369 command_runner.go:130] > # stats_collection_period = 0
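	The [crio.metrics] section above turns the Prometheus endpoint on (enable_metrics = true) while leaving the port at its commented default. A minimal Go sketch of scraping that endpoint, assuming the default port 9090 and a local CRI-O instance (names and port are taken from the commented defaults, not from this run's effective config):
	// metrics_probe.go: minimal sketch; assumes CRI-O's metrics server listens
	// on the commented-out default port 9090 shown in the config above.
	package main
	
	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// enable_metrics = true exposes Prometheus text-format metrics here.
		resp, err := client.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Println("metrics endpoint not reachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("got %d bytes of metrics, status %s\n", len(body), resp.Status)
	}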
	I1128 03:04:32.247580  353369 command_runner.go:130] ! time="2023-11-28 03:04:32.225623314Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 03:04:32.247607  353369 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 03:04:32.247766  353369 cni.go:84] Creating CNI manager for ""
	I1128 03:04:32.247782  353369 cni.go:136] 1 nodes found, recommending kindnet
	I1128 03:04:32.247804  353369 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 03:04:32.247824  353369 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-112998 NodeName:multinode-112998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 03:04:32.247959  353369 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-112998"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 03:04:32.248066  353369 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 03:04:32.248151  353369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 03:04:32.257526  353369 command_runner.go:130] > kubeadm
	I1128 03:04:32.257550  353369 command_runner.go:130] > kubectl
	I1128 03:04:32.257557  353369 command_runner.go:130] > kubelet
	I1128 03:04:32.257576  353369 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 03:04:32.257658  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 03:04:32.267913  353369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1128 03:04:32.284249  353369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 03:04:32.299886  353369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1128 03:04:32.315875  353369 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I1128 03:04:32.319691  353369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
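	The bash one-liner above rewrites /etc/hosts by dropping any stale line for control-plane.minikube.internal and appending the current IP. A minimal Go sketch of the same filter-and-append, with the IP and hostname hard-coded from this run; it writes to /tmp/hosts.new rather than /etc/hosts, since the real update needs root:
	// hosts_update.go: illustrative sketch of the filter-and-append done by the
	// bash one-liner above; IP and hostname are taken from this run's log.
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	func main() {
		const entry = "192.168.39.73\tcontrol-plane.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any stale mapping for the control-plane alias.
			if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		out := strings.Join(kept, "\n") + "\n"
		// Writing /etc/hosts itself needs root; a temp copy keeps the sketch safe.
		if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
			panic(err)
		}
		fmt.Println("wrote /tmp/hosts.new with", len(kept), "lines")
	}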
	I1128 03:04:32.332029  353369 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998 for IP: 192.168.39.73
	I1128 03:04:32.332070  353369 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.332250  353369 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 03:04:32.332309  353369 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 03:04:32.332367  353369 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key
	I1128 03:04:32.332385  353369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt with IP's: []
	I1128 03:04:32.471867  353369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt ...
	I1128 03:04:32.471910  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt: {Name:mk6601832c2ebbb3a44ecb72f7d5ecd80888ea2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.472099  353369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key ...
	I1128 03:04:32.472112  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key: {Name:mk82094a0fc4b5c19e12afc0e0d22fe2a4e266ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.472191  353369 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key.8b49dc8b
	I1128 03:04:32.472207  353369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt.8b49dc8b with IP's: [192.168.39.73 10.96.0.1 127.0.0.1 10.0.0.1]
	I1128 03:04:32.512494  353369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt.8b49dc8b ...
	I1128 03:04:32.512532  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt.8b49dc8b: {Name:mk702a911ac8ba0a61f41a02fb4b27ff965c91db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.512699  353369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key.8b49dc8b ...
	I1128 03:04:32.512713  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key.8b49dc8b: {Name:mk30d6bddbe15ab3b2a365516bd7e41938a099e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.512791  353369 certs.go:337] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt.8b49dc8b -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt
	I1128 03:04:32.512903  353369 certs.go:341] copying /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key.8b49dc8b -> /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key
	I1128 03:04:32.512973  353369 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key
	I1128 03:04:32.512999  353369 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt with IP's: []
	I1128 03:04:32.628608  353369 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt ...
	I1128 03:04:32.628648  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt: {Name:mk2dcc2c0bd24fb73bf2406d3b069b5c7be3e64a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.628815  353369 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key ...
	I1128 03:04:32.628834  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key: {Name:mk8d8494403c10d69fd7ca5faacfe12d26ea181f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:32.628978  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 03:04:32.629005  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 03:04:32.629022  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 03:04:32.629038  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 03:04:32.629056  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 03:04:32.629072  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 03:04:32.629087  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 03:04:32.629102  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 03:04:32.629163  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 03:04:32.629203  353369 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 03:04:32.629216  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 03:04:32.629248  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 03:04:32.629273  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 03:04:32.629302  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 03:04:32.629359  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:04:32.629393  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:04:32.629408  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 03:04:32.629423  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 03:04:32.630065  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 03:04:32.655299  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 03:04:32.679228  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 03:04:32.703037  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 03:04:32.724990  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 03:04:32.747184  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 03:04:32.769173  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 03:04:32.791417  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 03:04:32.813901  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 03:04:32.838332  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 03:04:32.864004  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 03:04:32.889436  353369 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 03:04:32.907196  353369 ssh_runner.go:195] Run: openssl version
	I1128 03:04:32.913016  353369 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 03:04:32.913094  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 03:04:32.922836  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:04:32.927321  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:04:32.927383  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:04:32.927446  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:04:32.933128  353369 command_runner.go:130] > b5213941
	I1128 03:04:32.933358  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 03:04:32.943318  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 03:04:32.953500  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 03:04:32.958076  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:04:32.958146  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:04:32.958211  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 03:04:32.963411  353369 command_runner.go:130] > 51391683
	I1128 03:04:32.963704  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 03:04:32.973004  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 03:04:32.982403  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 03:04:32.986894  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:04:32.986924  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:04:32.986985  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 03:04:32.992167  353369 command_runner.go:130] > 3ec20f2e
	I1128 03:04:32.992426  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
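	Each openssl/ln pair above computes a CA certificate's OpenSSL subject hash and symlinks /etc/ssl/certs/<hash>.0 to the PEM so system TLS lookups can find it. A minimal Go sketch of one such install step, assuming root access and reusing the minikubeCA path and `openssl x509 -hash` call from this run:
	// ca_symlink.go: minimal sketch of the hash-and-symlink step logged above;
	// paths mirror this run's log and writing /etc/ssl/certs requires root.
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	func main() {
		certPath := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in this run
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// ln -fs equivalent: remove any old link, then symlink the PEM.
		_ = os.Remove(link)
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", link, "->", certPath)
	}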
	I1128 03:04:33.002104  353369 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 03:04:33.006137  353369 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:04:33.006290  353369 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:04:33.006352  353369 kubeadm.go:404] StartCluster: {Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:04:33.006445  353369 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 03:04:33.006516  353369 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 03:04:33.049484  353369 cri.go:89] found id: ""
	I1128 03:04:33.049574  353369 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 03:04:33.058069  353369 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I1128 03:04:33.058095  353369 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I1128 03:04:33.058101  353369 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I1128 03:04:33.058164  353369 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 03:04:33.066513  353369 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 03:04:33.074837  353369 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1128 03:04:33.074868  353369 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1128 03:04:33.074875  353369 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1128 03:04:33.074882  353369 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 03:04:33.075137  353369 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
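
Note: the checks above confirm a clean first start (no stale kubeconfigs under /etc/kubernetes), so minikube proceeds to run kubeadm init against the config it rendered at /var/tmp/minikube/kubeadm.yaml. A minimal way to inspect that rendered config on the node, assuming the profile name shown in this log, is:

	minikube -p multinode-112998 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml
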
	I1128 03:04:33.075180  353369 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 03:04:33.186449  353369 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 03:04:33.186484  353369 command_runner.go:130] > [init] Using Kubernetes version: v1.28.4
	I1128 03:04:33.186531  353369 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 03:04:33.186542  353369 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 03:04:33.430813  353369 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 03:04:33.430839  353369 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 03:04:33.431006  353369 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 03:04:33.431026  353369 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 03:04:33.431156  353369 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 03:04:33.431166  353369 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 03:04:33.655685  353369 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 03:04:33.655742  353369 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 03:04:33.739334  353369 out.go:204]   - Generating certificates and keys ...
	I1128 03:04:33.739502  353369 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1128 03:04:33.739528  353369 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 03:04:33.739613  353369 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1128 03:04:33.739625  353369 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 03:04:33.746360  353369 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 03:04:33.746372  353369 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I1128 03:04:33.904869  353369 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I1128 03:04:33.904921  353369 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I1128 03:04:34.221395  353369 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I1128 03:04:34.221436  353369 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I1128 03:04:34.302239  353369 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I1128 03:04:34.302275  353369 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I1128 03:04:34.553146  353369 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I1128 03:04:34.553168  353369 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I1128 03:04:34.620974  353369 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-112998] and IPs [192.168.39.73 127.0.0.1 ::1]
	I1128 03:04:34.621009  353369 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-112998] and IPs [192.168.39.73 127.0.0.1 ::1]
	I1128 03:04:34.639170  353369 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I1128 03:04:34.639197  353369 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I1128 03:04:34.639683  353369 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-112998] and IPs [192.168.39.73 127.0.0.1 ::1]
	I1128 03:04:34.639731  353369 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-112998] and IPs [192.168.39.73 127.0.0.1 ::1]
	I1128 03:04:34.850383  353369 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 03:04:34.850419  353369 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I1128 03:04:35.065904  353369 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 03:04:35.065937  353369 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I1128 03:04:35.389956  353369 kubeadm.go:322] [certs] Generating "sa" key and public key
	I1128 03:04:35.390000  353369 command_runner.go:130] > [certs] Generating "sa" key and public key
	I1128 03:04:35.390261  353369 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 03:04:35.390303  353369 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 03:04:35.510062  353369 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 03:04:35.510106  353369 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 03:04:35.566878  353369 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 03:04:35.566915  353369 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 03:04:35.709951  353369 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 03:04:35.710046  353369 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 03:04:35.763769  353369 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 03:04:35.763797  353369 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 03:04:35.764512  353369 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 03:04:35.764539  353369 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 03:04:35.767857  353369 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 03:04:35.767880  353369 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 03:04:35.770180  353369 out.go:204]   - Booting up control plane ...
	I1128 03:04:35.770305  353369 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 03:04:35.770331  353369 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 03:04:35.770474  353369 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 03:04:35.770489  353369 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 03:04:35.772109  353369 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 03:04:35.772124  353369 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 03:04:35.786930  353369 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:04:35.786956  353369 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:04:35.787928  353369 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:04:35.787953  353369 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:04:35.788039  353369 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 03:04:35.788054  353369 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 03:04:35.905531  353369 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 03:04:35.905564  353369 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 03:04:42.909098  353369 kubeadm.go:322] [apiclient] All control plane components are healthy after 7.004115 seconds
	I1128 03:04:42.909139  353369 command_runner.go:130] > [apiclient] All control plane components are healthy after 7.004115 seconds
	I1128 03:04:42.909270  353369 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 03:04:42.909281  353369 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 03:04:42.931980  353369 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 03:04:42.932027  353369 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 03:04:43.462453  353369 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 03:04:43.462483  353369 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I1128 03:04:43.462632  353369 kubeadm.go:322] [mark-control-plane] Marking the node multinode-112998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 03:04:43.462670  353369 command_runner.go:130] > [mark-control-plane] Marking the node multinode-112998 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 03:04:43.978709  353369 kubeadm.go:322] [bootstrap-token] Using token: oult9p.3speeika05qdjatc
	I1128 03:04:43.978762  353369 command_runner.go:130] > [bootstrap-token] Using token: oult9p.3speeika05qdjatc
	I1128 03:04:43.980326  353369 out.go:204]   - Configuring RBAC rules ...
	I1128 03:04:43.980474  353369 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 03:04:43.980492  353369 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 03:04:43.991749  353369 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 03:04:43.991778  353369 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 03:04:43.999789  353369 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 03:04:43.999833  353369 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 03:04:44.007170  353369 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 03:04:44.007197  353369 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 03:04:44.010971  353369 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 03:04:44.010997  353369 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 03:04:44.015401  353369 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 03:04:44.015428  353369 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 03:04:44.037121  353369 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 03:04:44.037146  353369 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 03:04:44.263254  353369 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 03:04:44.263292  353369 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1128 03:04:44.416075  353369 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 03:04:44.416109  353369 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1128 03:04:44.417246  353369 kubeadm.go:322] 
	I1128 03:04:44.417364  353369 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 03:04:44.417381  353369 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I1128 03:04:44.417387  353369 kubeadm.go:322] 
	I1128 03:04:44.417484  353369 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 03:04:44.417531  353369 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I1128 03:04:44.417563  353369 kubeadm.go:322] 
	I1128 03:04:44.417608  353369 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 03:04:44.417633  353369 command_runner.go:130] >   mkdir -p $HOME/.kube
	I1128 03:04:44.417705  353369 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 03:04:44.417757  353369 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 03:04:44.417873  353369 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 03:04:44.417894  353369 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 03:04:44.417902  353369 kubeadm.go:322] 
	I1128 03:04:44.417988  353369 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 03:04:44.418011  353369 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I1128 03:04:44.418021  353369 kubeadm.go:322] 
	I1128 03:04:44.418098  353369 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 03:04:44.418108  353369 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 03:04:44.418112  353369 kubeadm.go:322] 
	I1128 03:04:44.418174  353369 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 03:04:44.418183  353369 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I1128 03:04:44.418270  353369 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 03:04:44.418280  353369 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 03:04:44.418366  353369 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 03:04:44.418380  353369 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 03:04:44.418389  353369 kubeadm.go:322] 
	I1128 03:04:44.418485  353369 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 03:04:44.418493  353369 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I1128 03:04:44.418593  353369 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 03:04:44.418611  353369 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I1128 03:04:44.418618  353369 kubeadm.go:322] 
	I1128 03:04:44.418730  353369 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token oult9p.3speeika05qdjatc \
	I1128 03:04:44.418742  353369 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token oult9p.3speeika05qdjatc \
	I1128 03:04:44.418876  353369 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 03:04:44.418887  353369 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 03:04:44.418922  353369 kubeadm.go:322] 	--control-plane 
	I1128 03:04:44.418932  353369 command_runner.go:130] > 	--control-plane 
	I1128 03:04:44.418937  353369 kubeadm.go:322] 
	I1128 03:04:44.419061  353369 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 03:04:44.419084  353369 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I1128 03:04:44.419105  353369 kubeadm.go:322] 
	I1128 03:04:44.419223  353369 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token oult9p.3speeika05qdjatc \
	I1128 03:04:44.419244  353369 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token oult9p.3speeika05qdjatc \
	I1128 03:04:44.419388  353369 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 03:04:44.419402  353369 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 03:04:44.420520  353369 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 03:04:44.420552  353369 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
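
Note: the join commands printed above embed a bootstrap token and the CA certificate hash. A sketch of how they could be reproduced later on the control-plane node (for example after the token expires), assuming the certificate directory /var/lib/minikube/certs and the binaries path used elsewhere in this log:

	# recompute the --discovery-token-ca-cert-hash value
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# mint a fresh token and print a complete join command
	sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command
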
	I1128 03:04:44.420588  353369 cni.go:84] Creating CNI manager for ""
	I1128 03:04:44.420600  353369 cni.go:136] 1 nodes found, recommending kindnet
	I1128 03:04:44.422153  353369 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 03:04:44.423665  353369 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 03:04:44.445912  353369 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 03:04:44.445966  353369 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 03:04:44.445978  353369 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 03:04:44.445988  353369 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:04:44.445998  353369 command_runner.go:130] > Access: 2023-11-28 03:04:12.954160347 +0000
	I1128 03:04:44.446010  353369 command_runner.go:130] > Modify: 2023-11-16 19:19:18.000000000 +0000
	I1128 03:04:44.446018  353369 command_runner.go:130] > Change: 2023-11-28 03:04:11.106160347 +0000
	I1128 03:04:44.446026  353369 command_runner.go:130] >  Birth: -
	I1128 03:04:44.451548  353369 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 03:04:44.451579  353369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 03:04:44.538319  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 03:04:45.659487  353369 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I1128 03:04:45.659526  353369 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I1128 03:04:45.659534  353369 command_runner.go:130] > serviceaccount/kindnet created
	I1128 03:04:45.659539  353369 command_runner.go:130] > daemonset.apps/kindnet created
	I1128 03:04:45.659574  353369 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.121213174s)
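
Note: the apply above installs kindnet as the CNI (one node found, kindnet recommended). A quick verification sketch, assuming kindnet is deployed into kube-system as in stock minikube:

	kubectl --context multinode-112998 -n kube-system get daemonset kindnet
	kubectl --context multinode-112998 -n kube-system get pods -o wide | grep kindnet
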
	I1128 03:04:45.659620  353369 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 03:04:45.659715  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:45.659749  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=multinode-112998 minikube.k8s.io/updated_at=2023_11_28T03_04_45_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:45.674057  353369 command_runner.go:130] > -16
	I1128 03:04:45.674193  353369 ops.go:34] apiserver oom_adj: -16
	I1128 03:04:45.810392  353369 command_runner.go:130] > node/multinode-112998 labeled
	I1128 03:04:45.810442  353369 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I1128 03:04:45.810548  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:45.908124  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:45.910176  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:45.997835  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:46.500057  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:46.589077  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:46.999539  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:47.081771  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:47.500403  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:47.584854  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:47.999411  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:48.083736  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:48.499762  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:48.582788  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:48.999757  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:49.086838  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:49.500260  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:49.587984  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:50.000149  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:50.086591  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:50.500418  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:50.586257  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:50.999515  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:51.093108  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:51.499476  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:51.584923  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:51.999472  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:52.080188  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:52.500220  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:52.598748  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:53.000003  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:53.081690  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:53.499633  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:53.605144  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:54.000364  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:54.105562  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:54.499891  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:54.624048  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:54.999481  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:55.109072  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:55.499522  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:55.594068  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:55.999908  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:56.154301  353369 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I1128 03:04:56.500178  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 03:04:56.608196  353369 command_runner.go:130] > NAME      SECRETS   AGE
	I1128 03:04:56.608222  353369 command_runner.go:130] > default   0         0s
	I1128 03:04:56.609638  353369 kubeadm.go:1081] duration metric: took 10.949984306s to wait for elevateKubeSystemPrivileges.
	I1128 03:04:56.609671  353369 kubeadm.go:406] StartCluster complete in 23.603328513s
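
Note: the roughly 11-second retry loop above is minikube waiting for the token controller to create the "default" serviceaccount, which is its signal that kube-system privileges can be elevated. A minimal equivalent wait, assuming the same kubeconfig context:

	until kubectl --context multinode-112998 -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 1
	done
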
	I1128 03:04:56.609690  353369 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:56.609786  353369 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:04:56.610521  353369 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:04:56.610784  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 03:04:56.610884  353369 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 03:04:56.610988  353369 addons.go:69] Setting storage-provisioner=true in profile "multinode-112998"
	I1128 03:04:56.611010  353369 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:04:56.611043  353369 addons.go:231] Setting addon storage-provisioner=true in "multinode-112998"
	I1128 03:04:56.611053  353369 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:04:56.611049  353369 addons.go:69] Setting default-storageclass=true in profile "multinode-112998"
	I1128 03:04:56.611103  353369 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-112998"
	I1128 03:04:56.611113  353369 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:04:56.611438  353369 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:04:56.611666  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:04:56.611717  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:04:56.611821  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:04:56.611863  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:04:56.612323  353369 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 03:04:56.612804  353369 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:04:56.612827  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:56.612838  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:56.612851  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:56.625055  353369 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1128 03:04:56.625083  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:56.625094  353369 round_trippers.go:580]     Audit-Id: f806dd7d-ff8d-430c-9f05-1ed13e71a3fa
	I1128 03:04:56.625103  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:56.625111  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:56.625119  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:56.625129  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:56.625139  353369 round_trippers.go:580]     Content-Length: 291
	I1128 03:04:56.625148  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:56 GMT
	I1128 03:04:56.625215  353369 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"269","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 03:04:56.625823  353369 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"269","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 03:04:56.625927  353369 round_trippers.go:463] PUT https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:04:56.625946  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:56.625957  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:56.625965  353369 round_trippers.go:473]     Content-Type: application/json
	I1128 03:04:56.625984  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:56.628170  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I1128 03:04:56.628565  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:04:56.629078  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:04:56.629102  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:04:56.629473  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:04:56.630028  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:04:56.630099  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:04:56.630369  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40715
	I1128 03:04:56.630770  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:04:56.632247  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:04:56.632276  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:04:56.632668  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:04:56.632946  353369 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:04:56.635414  353369 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:04:56.635741  353369 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:04:56.636104  353369 addons.go:231] Setting addon default-storageclass=true in "multinode-112998"
	I1128 03:04:56.636146  353369 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:04:56.636574  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:04:56.636630  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:04:56.640157  353369 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I1128 03:04:56.640178  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:56.640190  353369 round_trippers.go:580]     Audit-Id: 29bd3924-f3be-4378-bc20-b77ed0b16604
	I1128 03:04:56.640199  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:56.640211  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:56.640220  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:56.640233  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:56.640254  353369 round_trippers.go:580]     Content-Length: 291
	I1128 03:04:56.640266  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:56 GMT
	I1128 03:04:56.640298  353369 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"367","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 03:04:56.640463  353369 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:04:56.640479  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:56.640490  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:56.640498  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:56.644825  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:04:56.644849  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:56.644859  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:56.644869  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:56.644876  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:56.644898  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:56.644908  353369 round_trippers.go:580]     Content-Length: 291
	I1128 03:04:56.644919  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:56 GMT
	I1128 03:04:56.644928  353369 round_trippers.go:580]     Audit-Id: c441181e-688f-4a32-a3f5-1df69b33cc06
	I1128 03:04:56.644959  353369 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"367","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I1128 03:04:56.645078  353369 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-112998" context rescaled to 1 replicas
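
Note: the GET/PUT pair above uses the Deployment's scale subresource to drop CoreDNS from 2 replicas to 1, minikube's default for a single control plane. The same change can be made or verified with kubectl, assuming the context name from this log:

	kubectl --context multinode-112998 -n kube-system scale deployment coredns --replicas=1
	kubectl --context multinode-112998 -n kube-system get deployment coredns
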
	I1128 03:04:56.645115  353369 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 03:04:56.646808  353369 out.go:177] * Verifying Kubernetes components...
	I1128 03:04:56.648218  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:04:56.646354  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46351
	I1128 03:04:56.648944  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:04:56.649491  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:04:56.649514  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:04:56.649920  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:04:56.650220  353369 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:04:56.651858  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38721
	I1128 03:04:56.652235  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:04:56.652352  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:56.654337  353369 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 03:04:56.652784  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:04:56.655866  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:04:56.655973  353369 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 03:04:56.655991  353369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 03:04:56.656004  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:56.656274  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:04:56.656805  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:04:56.656853  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:04:56.659398  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:56.659824  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:56.659858  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:56.660133  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:56.660336  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:56.660503  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:56.660669  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:56.671812  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42673
	I1128 03:04:56.672240  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:04:56.672687  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:04:56.672711  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:04:56.673064  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:04:56.673237  353369 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:04:56.674755  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:04:56.675031  353369 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 03:04:56.675052  353369 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 03:04:56.675068  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:04:56.677817  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:56.678213  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:04:56.678241  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:04:56.678397  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:04:56.678586  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:04:56.678721  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:04:56.678854  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:04:56.777987  353369 command_runner.go:130] > apiVersion: v1
	I1128 03:04:56.778012  353369 command_runner.go:130] > data:
	I1128 03:04:56.778019  353369 command_runner.go:130] >   Corefile: |
	I1128 03:04:56.778025  353369 command_runner.go:130] >     .:53 {
	I1128 03:04:56.778030  353369 command_runner.go:130] >         errors
	I1128 03:04:56.778038  353369 command_runner.go:130] >         health {
	I1128 03:04:56.778046  353369 command_runner.go:130] >            lameduck 5s
	I1128 03:04:56.778051  353369 command_runner.go:130] >         }
	I1128 03:04:56.778079  353369 command_runner.go:130] >         ready
	I1128 03:04:56.778091  353369 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1128 03:04:56.778098  353369 command_runner.go:130] >            pods insecure
	I1128 03:04:56.778105  353369 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1128 03:04:56.778113  353369 command_runner.go:130] >            ttl 30
	I1128 03:04:56.778123  353369 command_runner.go:130] >         }
	I1128 03:04:56.778131  353369 command_runner.go:130] >         prometheus :9153
	I1128 03:04:56.778145  353369 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1128 03:04:56.778151  353369 command_runner.go:130] >            max_concurrent 1000
	I1128 03:04:56.778155  353369 command_runner.go:130] >         }
	I1128 03:04:56.778159  353369 command_runner.go:130] >         cache 30
	I1128 03:04:56.778163  353369 command_runner.go:130] >         loop
	I1128 03:04:56.778169  353369 command_runner.go:130] >         reload
	I1128 03:04:56.778177  353369 command_runner.go:130] >         loadbalance
	I1128 03:04:56.778181  353369 command_runner.go:130] >     }
	I1128 03:04:56.778187  353369 command_runner.go:130] > kind: ConfigMap
	I1128 03:04:56.778195  353369 command_runner.go:130] > metadata:
	I1128 03:04:56.778213  353369 command_runner.go:130] >   creationTimestamp: "2023-11-28T03:04:44Z"
	I1128 03:04:56.778226  353369 command_runner.go:130] >   name: coredns
	I1128 03:04:56.778230  353369 command_runner.go:130] >   namespace: kube-system
	I1128 03:04:56.778238  353369 command_runner.go:130] >   resourceVersion: "265"
	I1128 03:04:56.778247  353369 command_runner.go:130] >   uid: 495740b6-25c3-48ab-96b3-4d2ad854ec0c
	I1128 03:04:56.778390  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
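
Note: the pipeline above rewrites the coredns ConfigMap dumped just before it: the sed inserts a hosts block that resolves host.minikube.internal to the host gateway (192.168.39.1 here) ahead of the forward plugin, adds the log plugin before errors, and kubectl replace pushes the result back. The inserted stanza ends up roughly as:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}
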
	I1128 03:04:56.778755  353369 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:04:56.779125  353369 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:04:56.779372  353369 node_ready.go:35] waiting up to 6m0s for node "multinode-112998" to be "Ready" ...
	I1128 03:04:56.779477  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:56.779487  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:56.779494  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:56.779503  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:56.781489  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:04:56.781507  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:56.781516  353369 round_trippers.go:580]     Audit-Id: 43aaeafd-e116-4892-a4a5-c0c29f825d4e
	I1128 03:04:56.781524  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:56.781532  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:56.781540  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:56.781549  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:56.781555  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:56 GMT
	I1128 03:04:56.781795  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"330","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:0
4:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5988 chars]
	I1128 03:04:56.782689  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:56.782712  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:56.782722  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:56.782736  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:56.785146  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:04:56.785169  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:56.785179  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:56.785193  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:56.785201  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:56 GMT
	I1128 03:04:56.785209  353369 round_trippers.go:580]     Audit-Id: 41b9b2c6-49f0-44e4-be64-848654cd450e
	I1128 03:04:56.785221  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:56.785233  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:56.785572  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"330","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:0
4:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5988 chars]
	I1128 03:04:56.810353  353369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 03:04:56.874190  353369 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 03:04:57.287094  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:57.287124  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:57.287136  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:57.287146  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:57.297939  353369 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I1128 03:04:57.297966  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:57.297975  353369 round_trippers.go:580]     Audit-Id: 020b40c9-f67d-41d2-b584-bd4eab33f8a8
	I1128 03:04:57.297984  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:57.297997  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:57.298004  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:57.298011  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:57.298020  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:57 GMT
	I1128 03:04:57.298194  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:04:57.680694  353369 command_runner.go:130] > configmap/coredns replaced
	I1128 03:04:57.684897  353369 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 03:04:57.786184  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:57.786210  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:57.786223  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:57.786233  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:57.790510  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:04:57.790532  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:57.790543  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:57.790552  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:57.790558  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:57.790563  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:57.790568  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:57 GMT
	I1128 03:04:57.790574  353369 round_trippers.go:580]     Audit-Id: 7e6c3bda-d5c2-4968-a8b4-b77d699949d0
	I1128 03:04:57.790812  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:04:57.839376  353369 command_runner.go:130] > serviceaccount/storage-provisioner created
	I1128 03:04:57.839417  353369 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I1128 03:04:57.839430  353369 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1128 03:04:57.839446  353369 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I1128 03:04:57.839455  353369 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I1128 03:04:57.839463  353369 command_runner.go:130] > pod/storage-provisioner created
	I1128 03:04:57.839496  353369 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I1128 03:04:57.839549  353369 main.go:141] libmachine: Making call to close driver server
	I1128 03:04:57.839569  353369 main.go:141] libmachine: (multinode-112998) Calling .Close
	I1128 03:04:57.839894  353369 main.go:141] libmachine: (multinode-112998) DBG | Closing plugin on server side
	I1128 03:04:57.839965  353369 main.go:141] libmachine: Successfully made call to close driver server
	I1128 03:04:57.839981  353369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 03:04:57.839992  353369 main.go:141] libmachine: Making call to close driver server
	I1128 03:04:57.840001  353369 main.go:141] libmachine: (multinode-112998) Calling .Close
	I1128 03:04:57.840228  353369 main.go:141] libmachine: Successfully made call to close driver server
	I1128 03:04:57.840244  353369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 03:04:57.840376  353369 round_trippers.go:463] GET https://192.168.39.73:8443/apis/storage.k8s.io/v1/storageclasses
	I1128 03:04:57.840392  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:57.840404  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:57.840421  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:57.841335  353369 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.030935914s)
	I1128 03:04:57.841415  353369 main.go:141] libmachine: Making call to close driver server
	I1128 03:04:57.841431  353369 main.go:141] libmachine: (multinode-112998) Calling .Close
	I1128 03:04:57.841700  353369 main.go:141] libmachine: Successfully made call to close driver server
	I1128 03:04:57.841718  353369 main.go:141] libmachine: (multinode-112998) DBG | Closing plugin on server side
	I1128 03:04:57.841723  353369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 03:04:57.841742  353369 main.go:141] libmachine: Making call to close driver server
	I1128 03:04:57.841752  353369 main.go:141] libmachine: (multinode-112998) Calling .Close
	I1128 03:04:57.841990  353369 main.go:141] libmachine: Successfully made call to close driver server
	I1128 03:04:57.842014  353369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 03:04:57.844479  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:04:57.844499  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:57.844510  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:57.844517  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:57.844530  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:57.844539  353369 round_trippers.go:580]     Content-Length: 1273
	I1128 03:04:57.844544  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:57 GMT
	I1128 03:04:57.844551  353369 round_trippers.go:580]     Audit-Id: 194d1020-ddbb-4446-8c6c-227e00ca4545
	I1128 03:04:57.844557  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:57.844590  353369 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"405"},"items":[{"metadata":{"name":"standard","uid":"6c4cd351-a16e-45f1-b9db-29f47e234511","resourceVersion":"397","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I1128 03:04:57.845030  353369 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6c4cd351-a16e-45f1-b9db-29f47e234511","resourceVersion":"397","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1128 03:04:57.845090  353369 round_trippers.go:463] PUT https://192.168.39.73:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1128 03:04:57.845098  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:57.845105  353369 round_trippers.go:473]     Content-Type: application/json
	I1128 03:04:57.845111  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:57.845119  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:57.856636  353369 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I1128 03:04:57.856669  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:57.856676  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:57.856682  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:57.856687  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:57.856695  353369 round_trippers.go:580]     Content-Length: 1220
	I1128 03:04:57.856701  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:57 GMT
	I1128 03:04:57.856706  353369 round_trippers.go:580]     Audit-Id: f6fb8e45-5a89-40b7-ab11-d4be05b0b9fc
	I1128 03:04:57.856711  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:57.856763  353369 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"6c4cd351-a16e-45f1-b9db-29f47e234511","resourceVersion":"397","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1128 03:04:57.856973  353369 main.go:141] libmachine: Making call to close driver server
	I1128 03:04:57.856995  353369 main.go:141] libmachine: (multinode-112998) Calling .Close
	I1128 03:04:57.857315  353369 main.go:141] libmachine: Successfully made call to close driver server
	I1128 03:04:57.857336  353369 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 03:04:57.857355  353369 main.go:141] libmachine: (multinode-112998) DBG | Closing plugin on server side
	I1128 03:04:57.859127  353369 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1128 03:04:57.860444  353369 addons.go:502] enable addons completed in 1.249569279s: enabled=[storage-provisioner default-storageclass]
	I1128 03:04:58.286522  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:58.286551  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:58.286573  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:58.286583  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:58.289301  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:04:58.289320  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:58.289327  353369 round_trippers.go:580]     Audit-Id: 96c91e31-da7a-4c62-890c-dec73520d666
	I1128 03:04:58.289337  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:58.289355  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:58.289367  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:58.289376  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:58.289384  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:58 GMT
	I1128 03:04:58.289598  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:04:58.786232  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:58.786261  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:58.786273  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:58.786281  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:58.788977  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:04:58.789007  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:58.789017  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:58.789026  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:58.789031  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:58 GMT
	I1128 03:04:58.789038  353369 round_trippers.go:580]     Audit-Id: 72a3c44a-c3d8-4bb2-a36a-06a18e4b5849
	I1128 03:04:58.789046  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:58.789051  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:58.789409  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:04:58.789753  353369 node_ready.go:58] node "multinode-112998" has status "Ready":"False"
	I1128 03:04:59.287166  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:59.287195  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:59.287208  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:59.287219  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:59.291289  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:04:59.291320  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:59.291330  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:59.291337  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:59.291348  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:59.291362  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:59 GMT
	I1128 03:04:59.291385  353369 round_trippers.go:580]     Audit-Id: cef0076e-dab7-4d11-a8a0-703ba1597422
	I1128 03:04:59.291401  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:59.291530  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:04:59.786159  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:04:59.786192  353369 round_trippers.go:469] Request Headers:
	I1128 03:04:59.786200  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:04:59.786206  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:04:59.789270  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:04:59.789290  353369 round_trippers.go:577] Response Headers:
	I1128 03:04:59.789297  353369 round_trippers.go:580]     Audit-Id: 3d6f14e1-a3a1-4612-8cf6-7fc829409e05
	I1128 03:04:59.789305  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:04:59.789314  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:04:59.789325  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:04:59.789332  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:04:59.789341  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:04:59 GMT
	I1128 03:04:59.789444  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:05:00.287125  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:00.287156  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:00.287164  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:00.287170  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:00.290027  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:00.290048  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:00.290055  353369 round_trippers.go:580]     Audit-Id: d81174ac-6944-459d-8f20-fb74e9e11248
	I1128 03:05:00.290061  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:00.290069  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:00.290074  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:00.290080  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:00.290087  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:00 GMT
	I1128 03:05:00.290317  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:05:00.787018  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:00.787049  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:00.787058  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:00.787064  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:00.790512  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:00.790543  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:00.790562  353369 round_trippers.go:580]     Audit-Id: 3314b219-83ca-4ab8-ac0f-0147e6b3136c
	I1128 03:05:00.790570  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:00.790579  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:00.790586  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:00.790594  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:00.790603  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:00 GMT
	I1128 03:05:00.790743  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:05:00.791131  353369 node_ready.go:58] node "multinode-112998" has status "Ready":"False"
	I1128 03:05:01.286331  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:01.286356  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:01.286366  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:01.286375  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:01.289518  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:01.289541  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:01.289548  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:01.289554  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:01.289563  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:01.289577  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:01.289583  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:01 GMT
	I1128 03:05:01.289588  353369 round_trippers.go:580]     Audit-Id: 616a53e1-691d-4bf4-b589-158e52b4e973
	I1128 03:05:01.289771  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:05:01.787023  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:01.787054  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:01.787062  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:01.787068  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:01.789794  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:01.789814  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:01.789821  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:01.789827  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:01 GMT
	I1128 03:05:01.789832  353369 round_trippers.go:580]     Audit-Id: 54b7d59a-1241-4b82-a3d3-3f49cc32b1d8
	I1128 03:05:01.789837  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:01.789842  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:01.789846  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:01.790404  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"374","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6092 chars]
	I1128 03:05:02.287191  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:02.287225  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.287239  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.287249  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.290283  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:02.290303  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.290316  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.290327  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.290335  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.290343  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.290351  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.290359  353369 round_trippers.go:580]     Audit-Id: 6827c7c6-3ccc-47e8-92ec-835fc0714439
	I1128 03:05:02.290552  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:02.290878  353369 node_ready.go:49] node "multinode-112998" has status "Ready":"True"
	I1128 03:05:02.290895  353369 node_ready.go:38] duration metric: took 5.511493894s waiting for node "multinode-112998" to be "Ready" ...
	I1128 03:05:02.290904  353369 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:05:02.290961  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:02.290974  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.290981  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.290987  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.295339  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:02.295357  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.295363  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.295369  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.295376  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.295381  353369 round_trippers.go:580]     Audit-Id: e9273542-64a3-4cc1-bb40-07d0b70355c6
	I1128 03:05:02.295387  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.295395  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.296383  353369 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"429"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"429","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54553 chars]
	I1128 03:05:02.299474  353369 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:02.299547  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:02.299559  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.299566  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.299572  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.301582  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:02.301597  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.301603  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.301608  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.301613  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.301618  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.301626  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.301635  353369 round_trippers.go:580]     Audit-Id: 307bd28a-de41-4611-8c5a-6c5caa2306cb
	I1128 03:05:02.301770  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"429","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1128 03:05:02.302157  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:02.302170  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.302176  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.302182  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.304178  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:02.304197  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.304207  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.304216  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.304225  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.304231  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.304239  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.304246  353369 round_trippers.go:580]     Audit-Id: e6d36f55-6f2b-43a1-ae65-e06a0eebb90d
	I1128 03:05:02.304405  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:02.304755  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:02.304768  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.304774  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.304780  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.306695  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:02.306711  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.306724  353369 round_trippers.go:580]     Audit-Id: a2d243f1-5f18-4c94-b721-ec3e9707a8f1
	I1128 03:05:02.306733  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.306742  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.306749  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.306758  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.306763  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.307079  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"429","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1128 03:05:02.307448  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:02.307461  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.307468  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.307473  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.313127  353369 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:05:02.313151  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.313161  353369 round_trippers.go:580]     Audit-Id: 24b7494e-6b15-47d9-831a-29564b66f288
	I1128 03:05:02.313169  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.313175  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.313184  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.313190  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.313195  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.313339  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:02.814234  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:02.814265  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.814278  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.814291  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.819724  353369 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:05:02.819748  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.819756  353369 round_trippers.go:580]     Audit-Id: 667abd27-4af0-4f0b-ac0d-4b7b497624e6
	I1128 03:05:02.819761  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.819768  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.819777  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.819786  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.819795  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.819924  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"429","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1128 03:05:02.820540  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:02.820559  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:02.820571  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:02.820581  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:02.822652  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:02.822675  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:02.822686  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:02.822695  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:02 GMT
	I1128 03:05:02.822703  353369 round_trippers.go:580]     Audit-Id: 39354b43-3ce3-4cb5-a846-9ccb34e6ac43
	I1128 03:05:02.822709  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:02.822715  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:02.822720  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:02.822860  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:03.314599  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:03.314634  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.314646  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.314657  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.317671  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:03.317692  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.317698  353369 round_trippers.go:580]     Audit-Id: 9b21e748-ae38-4d08-8640-c8f9450db066
	I1128 03:05:03.317704  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.317709  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.317714  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.317719  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.317724  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.317958  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"429","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6151 chars]
	I1128 03:05:03.318617  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:03.318637  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.318645  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.318652  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.320646  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:03.320661  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.320667  353369 round_trippers.go:580]     Audit-Id: 8a8dfa63-8754-4045-92a3-7a475f1caa4b
	I1128 03:05:03.320676  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.320681  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.320686  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.320691  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.320696  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.321170  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:03.813823  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:03.813853  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.813860  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.813867  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.816862  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:03.816901  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.816911  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.816921  353369 round_trippers.go:580]     Audit-Id: 61285c45-ec95-4ce9-814e-f9728ebc907f
	I1128 03:05:03.816930  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.816940  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.816954  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.816973  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.817248  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"443","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1128 03:05:03.817707  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:03.817725  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.817735  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.817744  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.819943  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:03.819961  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.819971  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.819979  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.819986  353369 round_trippers.go:580]     Audit-Id: 692f48e4-117e-4696-adc4-434c3a14c72d
	I1128 03:05:03.819998  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.820006  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.820022  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.820453  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:03.820753  353369 pod_ready.go:92] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:03.820773  353369 pod_ready.go:81] duration metric: took 1.521272769s waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:03.820781  353369 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:03.820824  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:05:03.820832  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.820857  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.820871  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.824010  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:03.824029  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.824035  353369 round_trippers.go:580]     Audit-Id: ac36cef1-f7bd-4d9d-ae42-47570ea00ee4
	I1128 03:05:03.824041  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.824051  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.824059  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.824064  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.824069  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.824219  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"408","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1128 03:05:03.824565  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:03.824581  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.824591  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.824599  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.826367  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:03.826385  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.826395  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.826402  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.826408  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.826413  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.826420  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.826426  353369 round_trippers.go:580]     Audit-Id: 96fdc33e-b98a-4f7f-9723-76fb238b1c39
	I1128 03:05:03.826692  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:03.827074  353369 pod_ready.go:92] pod "etcd-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:03.827097  353369 pod_ready.go:81] duration metric: took 6.309634ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:03.827109  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:03.827163  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:05:03.827168  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.827174  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.827180  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.831370  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:03.831389  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.831396  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.831401  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.831406  353369 round_trippers.go:580]     Audit-Id: 3330f372-bb58-43d4-b667-fa3a1fcc119c
	I1128 03:05:03.831411  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.831416  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.831421  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.831786  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"354","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7605 chars]
	I1128 03:05:03.832359  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:03.832383  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.832394  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.832405  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.837218  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:03.837237  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.837247  353369 round_trippers.go:580]     Audit-Id: ee6433ec-ae4b-47d5-a309-eca5198f7cc2
	I1128 03:05:03.837256  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.837263  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.837269  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.837274  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.837279  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.837444  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:03.837929  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:05:03.837949  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.837960  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.837969  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.843291  353369 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:05:03.843307  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.843316  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.843325  353369 round_trippers.go:580]     Audit-Id: 405c9593-bce9-4743-82e2-1e7bb9fff478
	I1128 03:05:03.843333  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.843341  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.843347  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.843352  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.843605  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"354","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7605 chars]
	I1128 03:05:03.887252  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:03.887291  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:03.887299  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:03.887305  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:03.889758  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:03.889786  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:03.889796  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:03.889801  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:03.889807  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:03.889812  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:03.889817  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:03 GMT
	I1128 03:05:03.889825  353369 round_trippers.go:580]     Audit-Id: c29bab0f-c871-43a1-8ec8-43c1426770c2
	I1128 03:05:03.890502  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:04.391339  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:05:04.391393  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:04.391402  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:04.391408  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:04.394323  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:04.394355  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:04.394366  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:04 GMT
	I1128 03:05:04.394374  353369 round_trippers.go:580]     Audit-Id: 134ce9a5-cdb1-4a72-b374-6f164d070c1f
	I1128 03:05:04.394382  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:04.394390  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:04.394398  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:04.394406  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:04.394825  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"354","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7605 chars]
	I1128 03:05:04.395298  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:04.395312  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:04.395319  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:04.395327  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:04.397575  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:04.397595  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:04.397604  353369 round_trippers.go:580]     Audit-Id: cb7ddfb3-367d-4d7f-81f6-153b03484ec8
	I1128 03:05:04.397612  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:04.397620  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:04.397631  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:04.397641  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:04.397651  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:04 GMT
	I1128 03:05:04.398027  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:04.891492  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:05:04.891522  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:04.891530  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:04.891536  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:04.900377  353369 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1128 03:05:04.900413  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:04.900425  353369 round_trippers.go:580]     Audit-Id: df63c096-8ae3-4754-8d0d-d8ddb6eccd60
	I1128 03:05:04.900433  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:04.900440  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:04.900445  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:04.900450  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:04.900455  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:04 GMT
	I1128 03:05:04.900620  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"449","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1128 03:05:04.901096  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:04.901116  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:04.901124  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:04.901130  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:04.904214  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:04.904240  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:04.904250  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:04.904259  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:04.904267  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:04 GMT
	I1128 03:05:04.904279  353369 round_trippers.go:580]     Audit-Id: 5946a2bd-5774-4adb-95a2-2f2fc4292a47
	I1128 03:05:04.904291  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:04.904300  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:04.904424  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:04.904835  353369 pod_ready.go:92] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:04.904864  353369 pod_ready.go:81] duration metric: took 1.077747013s waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:04.904894  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:04.904986  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:05:04.904998  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:04.905009  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:04.905019  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:04.907562  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:04.907583  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:04.907593  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:04 GMT
	I1128 03:05:04.907602  353369 round_trippers.go:580]     Audit-Id: 5397ea60-5edc-4d24-bf5d-3ffa37f7c657
	I1128 03:05:04.907610  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:04.907619  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:04.907627  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:04.907640  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:04.908073  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"450","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1128 03:05:05.087873  353369 request.go:629] Waited for 179.36171ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.087951  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.087956  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.087964  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.087974  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.090557  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:05.090580  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.090588  353369 round_trippers.go:580]     Audit-Id: c2c5b850-f0c2-46f3-9e3e-f210d9f789e4
	I1128 03:05:05.090593  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.090601  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.090607  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.090612  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.090620  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.091017  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:05.091410  353369 pod_ready.go:92] pod "kube-controller-manager-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:05.091432  353369 pod_ready.go:81] duration metric: took 186.525068ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
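
The "Waited for ... due to client-side throttling, not priority and fairness" messages that start appearing here come from client-go's client-side rate limiter, not from server-side API Priority and Fairness. As a minimal sketch, assuming a kubeconfig-based client and illustrative QPS/Burst values (this is not minikube's own configuration), the limiter can be relaxed like this:

package kubeclient

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// NewClientset builds a clientset with a more generous client-side rate
// limiter, so that polls like the ones above are less likely to hit the
// "Waited for ... due to client-side throttling" delay. The QPS and Burst
// values are illustrative assumptions, not minikube's configuration.
func NewClientset(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // sustained client-side requests per second
	cfg.Burst = 100 // extra headroom for short bursts of requests
	return kubernetes.NewForConfig(cfg)
}
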
	I1128 03:05:05.091443  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:05.287941  353369 request.go:629] Waited for 196.408634ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:05:05.288024  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:05:05.288042  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.288052  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.288065  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.291844  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:05.291875  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.291885  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.291894  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.291903  353369 round_trippers.go:580]     Audit-Id: 534da8bd-d790-4fc0-8ce7-e95c387e2d51
	I1128 03:05:05.291910  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.291918  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.291926  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.292737  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"413","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:05:05.487173  353369 request.go:629] Waited for 193.947465ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.487241  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.487246  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.487264  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.487277  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.490262  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:05.490285  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.490292  353369 round_trippers.go:580]     Audit-Id: 113f25cc-5e8f-4940-b882-75ac2d982976
	I1128 03:05:05.490297  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.490310  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.490317  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.490330  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.490342  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.490596  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:05.491023  353369 pod_ready.go:92] pod "kube-proxy-bmr6b" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:05.491046  353369 pod_ready.go:81] duration metric: took 399.597187ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:05.491058  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:05.687504  353369 request.go:629] Waited for 196.369534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:05:05.687574  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:05:05.687579  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.687587  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.687594  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.691984  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:05.692008  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.692015  353369 round_trippers.go:580]     Audit-Id: 289a273d-afd3-4ec9-a304-35dc65cccdf1
	I1128 03:05:05.692021  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.692026  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.692031  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.692036  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.692042  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.692706  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"448","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1128 03:05:05.887593  353369 request.go:629] Waited for 194.405924ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.887658  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:05.887663  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.887671  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.887683  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.890352  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:05.890378  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.890385  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.890391  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.890396  353369 round_trippers.go:580]     Audit-Id: 7e7e47c9-111e-4413-bf59-dd5e611df675
	I1128 03:05:05.890402  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.890407  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.890419  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.890614  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:05.891191  353369 pod_ready.go:92] pod "kube-scheduler-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:05.891224  353369 pod_ready.go:81] duration metric: took 400.156216ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:05.891243  353369 pod_ready.go:38] duration metric: took 3.600325369s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
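
Each pod_ready.go wait above follows the same loop: GET the pod, inspect its PodReady condition, and retry until it reports True or the 6m0s budget runs out. A minimal client-go sketch of that loop is below; the namespace, pod name, timeout, and rough poll cadence mirror the log, while the kubeconfig handling and everything else are assumptions rather than minikube's actual pod_ready.go code.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's PodReady condition is True, which is
// the same signal the pod_ready.go lines above are waiting on.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// 6m0s budget, matching "waiting up to 6m0s for pod ..." in the log.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-112998", metav1.GetOptions{})
		if err == nil && podIsReady(pod) {
			fmt.Printf("pod %q is Ready\n", pod.Name)
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for pod to be Ready")
		case <-time.After(500 * time.Millisecond): // re-poll, roughly the cadence seen above
		}
	}
}
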
	I1128 03:05:05.891270  353369 api_server.go:52] waiting for apiserver process to appear ...
	I1128 03:05:05.891386  353369 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:05:05.905181  353369 command_runner.go:130] > 1079
	I1128 03:05:05.905325  353369 api_server.go:72] duration metric: took 9.260178173s to wait for apiserver process to appear ...
	I1128 03:05:05.905350  353369 api_server.go:88] waiting for apiserver healthz status ...
	I1128 03:05:05.905370  353369 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:05:05.910305  353369 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I1128 03:05:05.910365  353369 round_trippers.go:463] GET https://192.168.39.73:8443/version
	I1128 03:05:05.910376  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:05.910384  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:05.910390  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:05.911463  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:05.911485  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:05.911495  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:05 GMT
	I1128 03:05:05.911501  353369 round_trippers.go:580]     Audit-Id: 1d85faf9-9de5-4d0c-b56a-55b5938eb986
	I1128 03:05:05.911507  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:05.911515  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:05.911526  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:05.911539  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:05.911550  353369 round_trippers.go:580]     Content-Length: 264
	I1128 03:05:05.911572  353369 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1128 03:05:05.911697  353369 api_server.go:141] control plane version: v1.28.4
	I1128 03:05:05.911716  353369 api_server.go:131] duration metric: took 6.360151ms to wait for apiserver health ...
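
The two probes logged above are a GET on /healthz, which a healthy apiserver answers with a bare "ok", and a GET on /version, whose JSON body carries the v1.28.4 control-plane version. A rough equivalent using client-go's discovery client (kubeconfig handling assumed; minikube's api_server.go may do this differently) could look like:

package main

import (
	"context"
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// GET /healthz: a healthy apiserver answers with a bare "ok",
	// matching the "returned 200: ok" lines in the log.
	body, err := cs.Discovery().RESTClient().
		Get().AbsPath("/healthz").Do(context.Background()).Raw()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("/healthz: %s\n", body)

	// GET /version: returns the same JSON shown above (major, minor,
	// gitVersion, ...); ServerVersion decodes it into a version.Info.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion)
}
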
	I1128 03:05:05.911724  353369 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 03:05:06.088221  353369 request.go:629] Waited for 176.412989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:06.088317  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:06.088325  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:06.088337  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:06.088351  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:06.092100  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:06.092126  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:06.092133  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:06.092139  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:06 GMT
	I1128 03:05:06.092144  353369 round_trippers.go:580]     Audit-Id: 96abcf27-3e7b-4e30-8564-d39749eb3627
	I1128 03:05:06.092149  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:06.092154  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:06.092159  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:06.093537  353369 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"443","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1128 03:05:06.095240  353369 system_pods.go:59] 8 kube-system pods found
	I1128 03:05:06.095264  353369 system_pods.go:61] "coredns-5dd5756b68-sd64m" [0d5cae9f-6647-42f9-a8e7-1f14dc9fa422] Running
	I1128 03:05:06.095269  353369 system_pods.go:61] "etcd-multinode-112998" [d09c5f66-0756-4402-ae0e-3b10c34e059c] Running
	I1128 03:05:06.095272  353369 system_pods.go:61] "kindnet-5pfcd" [370f4bc7-f3dd-456e-b67a-fff569e42ac1] Running
	I1128 03:05:06.095276  353369 system_pods.go:61] "kube-apiserver-multinode-112998" [2191c8f0-3de1-4415-9bc9-b5dc50008609] Running
	I1128 03:05:06.095284  353369 system_pods.go:61] "kube-controller-manager-multinode-112998" [9c108920-a3e5-4377-96a3-97a4538555a0] Running
	I1128 03:05:06.095291  353369 system_pods.go:61] "kube-proxy-bmr6b" [0d9b86f2-025d-424d-a66f-ad3255685aca] Running
	I1128 03:05:06.095300  353369 system_pods.go:61] "kube-scheduler-multinode-112998" [b32dbcd4-76a8-4b87-b7d8-701f78a8285f] Running
	I1128 03:05:06.095303  353369 system_pods.go:61] "storage-provisioner" [80d85aa0-5ee8-48db-a570-fdde6138e079] Running
	I1128 03:05:06.095308  353369 system_pods.go:74] duration metric: took 183.57974ms to wait for pod list to return data ...
	I1128 03:05:06.095316  353369 default_sa.go:34] waiting for default service account to be created ...
	I1128 03:05:06.287776  353369 request.go:629] Waited for 192.370576ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/default/serviceaccounts
	I1128 03:05:06.287839  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/default/serviceaccounts
	I1128 03:05:06.287844  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:06.287851  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:06.287860  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:06.292076  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:06.292109  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:06.292121  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:06 GMT
	I1128 03:05:06.292130  353369 round_trippers.go:580]     Audit-Id: 64dd157a-e7f6-4444-a600-7a8929c33943
	I1128 03:05:06.292138  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:06.292146  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:06.292153  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:06.292161  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:06.292169  353369 round_trippers.go:580]     Content-Length: 261
	I1128 03:05:06.292205  353369 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5c736962-0501-45a3-be1a-3066e5ff4f01","resourceVersion":"331","creationTimestamp":"2023-11-28T03:04:56Z"}}]}
	I1128 03:05:06.292475  353369 default_sa.go:45] found service account: "default"
	I1128 03:05:06.292496  353369 default_sa.go:55] duration metric: took 197.171095ms for default service account to be created ...
	I1128 03:05:06.292505  353369 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 03:05:06.488017  353369 request.go:629] Waited for 195.442284ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:06.488103  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:06.488111  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:06.488125  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:06.488133  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:06.491659  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:06.491688  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:06.491706  353369 round_trippers.go:580]     Audit-Id: 0c9a955c-d53f-40cc-9465-1e705d85192d
	I1128 03:05:06.491716  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:06.491724  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:06.491733  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:06.491741  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:06.491749  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:06 GMT
	I1128 03:05:06.492932  353369 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"443","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53956 chars]
	I1128 03:05:06.494691  353369 system_pods.go:86] 8 kube-system pods found
	I1128 03:05:06.494716  353369 system_pods.go:89] "coredns-5dd5756b68-sd64m" [0d5cae9f-6647-42f9-a8e7-1f14dc9fa422] Running
	I1128 03:05:06.494721  353369 system_pods.go:89] "etcd-multinode-112998" [d09c5f66-0756-4402-ae0e-3b10c34e059c] Running
	I1128 03:05:06.494725  353369 system_pods.go:89] "kindnet-5pfcd" [370f4bc7-f3dd-456e-b67a-fff569e42ac1] Running
	I1128 03:05:06.494729  353369 system_pods.go:89] "kube-apiserver-multinode-112998" [2191c8f0-3de1-4415-9bc9-b5dc50008609] Running
	I1128 03:05:06.494734  353369 system_pods.go:89] "kube-controller-manager-multinode-112998" [9c108920-a3e5-4377-96a3-97a4538555a0] Running
	I1128 03:05:06.494741  353369 system_pods.go:89] "kube-proxy-bmr6b" [0d9b86f2-025d-424d-a66f-ad3255685aca] Running
	I1128 03:05:06.494747  353369 system_pods.go:89] "kube-scheduler-multinode-112998" [b32dbcd4-76a8-4b87-b7d8-701f78a8285f] Running
	I1128 03:05:06.494757  353369 system_pods.go:89] "storage-provisioner" [80d85aa0-5ee8-48db-a570-fdde6138e079] Running
	I1128 03:05:06.494766  353369 system_pods.go:126] duration metric: took 202.255444ms to wait for k8s-apps to be running ...
	I1128 03:05:06.494779  353369 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 03:05:06.494830  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:05:06.509557  353369 system_svc.go:56] duration metric: took 14.768881ms WaitForService to wait for kubelet.
	I1128 03:05:06.509585  353369 kubeadm.go:581] duration metric: took 9.864444814s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 03:05:06.509605  353369 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:05:06.687645  353369 request.go:629] Waited for 177.958424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes
	I1128 03:05:06.687715  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:05:06.687720  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:06.687728  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:06.687735  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:06.690431  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:06.690459  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:06.690477  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:06.690485  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:06.690490  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:06.690495  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:06.690500  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:06 GMT
	I1128 03:05:06.690505  353369 round_trippers.go:580]     Audit-Id: d1aa6e49-85fc-4206-8510-2fafda427e89
	I1128 03:05:06.690736  353369 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"452"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5951 chars]
	I1128 03:05:06.691238  353369 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:05:06.691268  353369 node_conditions.go:123] node cpu capacity is 2
	I1128 03:05:06.691281  353369 node_conditions.go:105] duration metric: took 181.671109ms to run NodePressure ...
	I1128 03:05:06.691293  353369 start.go:228] waiting for startup goroutines ...
	I1128 03:05:06.691303  353369 start.go:233] waiting for cluster config update ...
	I1128 03:05:06.691312  353369 start.go:242] writing updated cluster config ...
	I1128 03:05:06.693657  353369 out.go:177] 
	I1128 03:05:06.695271  353369 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:05:06.695372  353369 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:05:06.697027  353369 out.go:177] * Starting worker node multinode-112998-m02 in cluster multinode-112998
	I1128 03:05:06.698370  353369 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:05:06.698400  353369 cache.go:56] Caching tarball of preloaded images
	I1128 03:05:06.698528  353369 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 03:05:06.698541  353369 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 03:05:06.698641  353369 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:05:06.698850  353369 start.go:365] acquiring machines lock for multinode-112998-m02: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:05:06.698901  353369 start.go:369] acquired machines lock for "multinode-112998-m02" in 29.913µs
	I1128 03:05:06.698919  353369 start.go:93] Provisioning new machine with config: &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:t
rue ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:05:06.699030  353369 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1128 03:05:06.701610  353369 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1128 03:05:06.701695  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:05:06.701732  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:05:06.715965  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
	I1128 03:05:06.716419  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:05:06.716948  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:05:06.716973  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:05:06.717279  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:05:06.717477  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:05:06.717597  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:06.717781  353369 start.go:159] libmachine.API.Create for "multinode-112998" (driver="kvm2")
	I1128 03:05:06.717812  353369 client.go:168] LocalClient.Create starting
	I1128 03:05:06.717886  353369 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem
	I1128 03:05:06.717927  353369 main.go:141] libmachine: Decoding PEM data...
	I1128 03:05:06.717952  353369 main.go:141] libmachine: Parsing certificate...
	I1128 03:05:06.718014  353369 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem
	I1128 03:05:06.718041  353369 main.go:141] libmachine: Decoding PEM data...
	I1128 03:05:06.718059  353369 main.go:141] libmachine: Parsing certificate...
	I1128 03:05:06.718088  353369 main.go:141] libmachine: Running pre-create checks...
	I1128 03:05:06.718101  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .PreCreateCheck
	I1128 03:05:06.718265  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetConfigRaw
	I1128 03:05:06.718638  353369 main.go:141] libmachine: Creating machine...
	I1128 03:05:06.718652  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .Create
	I1128 03:05:06.718774  353369 main.go:141] libmachine: (multinode-112998-m02) Creating KVM machine...
	I1128 03:05:06.720092  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found existing default KVM network
	I1128 03:05:06.720245  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found existing private KVM network mk-multinode-112998
	I1128 03:05:06.720443  353369 main.go:141] libmachine: (multinode-112998-m02) Setting up store path in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02 ...
	I1128 03:05:06.720470  353369 main.go:141] libmachine: (multinode-112998-m02) Building disk image from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1128 03:05:06.720533  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:06.720424  353730 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:05:06.720641  353369 main.go:141] libmachine: (multinode-112998-m02) Downloading /home/jenkins/minikube-integration/17671-333305/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso...
	I1128 03:05:06.951127  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:06.950963  353730 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa...
	I1128 03:05:07.272065  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:07.271891  353730 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/multinode-112998-m02.rawdisk...
	I1128 03:05:07.272112  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Writing magic tar header
	I1128 03:05:07.272131  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Writing SSH key tar header
	I1128 03:05:07.272140  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:07.272006  353730 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02 ...
	I1128 03:05:07.272150  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02
	I1128 03:05:07.272157  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube/machines
	I1128 03:05:07.272222  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02 (perms=drwx------)
	I1128 03:05:07.272259  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:05:07.272273  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube/machines (perms=drwxr-xr-x)
	I1128 03:05:07.272285  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17671-333305
	I1128 03:05:07.272322  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1128 03:05:07.272338  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home/jenkins
	I1128 03:05:07.272350  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305/.minikube (perms=drwxr-xr-x)
	I1128 03:05:07.272364  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins/minikube-integration/17671-333305 (perms=drwxrwxr-x)
	I1128 03:05:07.272375  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1128 03:05:07.272387  353369 main.go:141] libmachine: (multinode-112998-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1128 03:05:07.272398  353369 main.go:141] libmachine: (multinode-112998-m02) Creating domain...
	I1128 03:05:07.272412  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Checking permissions on dir: /home
	I1128 03:05:07.272424  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Skipping /home - not owner
	I1128 03:05:07.273173  353369 main.go:141] libmachine: (multinode-112998-m02) define libvirt domain using xml: 
	I1128 03:05:07.273213  353369 main.go:141] libmachine: (multinode-112998-m02) <domain type='kvm'>
	I1128 03:05:07.273227  353369 main.go:141] libmachine: (multinode-112998-m02)   <name>multinode-112998-m02</name>
	I1128 03:05:07.273245  353369 main.go:141] libmachine: (multinode-112998-m02)   <memory unit='MiB'>2200</memory>
	I1128 03:05:07.273256  353369 main.go:141] libmachine: (multinode-112998-m02)   <vcpu>2</vcpu>
	I1128 03:05:07.273268  353369 main.go:141] libmachine: (multinode-112998-m02)   <features>
	I1128 03:05:07.273281  353369 main.go:141] libmachine: (multinode-112998-m02)     <acpi/>
	I1128 03:05:07.273292  353369 main.go:141] libmachine: (multinode-112998-m02)     <apic/>
	I1128 03:05:07.273306  353369 main.go:141] libmachine: (multinode-112998-m02)     <pae/>
	I1128 03:05:07.273323  353369 main.go:141] libmachine: (multinode-112998-m02)     
	I1128 03:05:07.273336  353369 main.go:141] libmachine: (multinode-112998-m02)   </features>
	I1128 03:05:07.273347  353369 main.go:141] libmachine: (multinode-112998-m02)   <cpu mode='host-passthrough'>
	I1128 03:05:07.273360  353369 main.go:141] libmachine: (multinode-112998-m02)   
	I1128 03:05:07.273371  353369 main.go:141] libmachine: (multinode-112998-m02)   </cpu>
	I1128 03:05:07.273384  353369 main.go:141] libmachine: (multinode-112998-m02)   <os>
	I1128 03:05:07.273399  353369 main.go:141] libmachine: (multinode-112998-m02)     <type>hvm</type>
	I1128 03:05:07.273412  353369 main.go:141] libmachine: (multinode-112998-m02)     <boot dev='cdrom'/>
	I1128 03:05:07.273421  353369 main.go:141] libmachine: (multinode-112998-m02)     <boot dev='hd'/>
	I1128 03:05:07.273435  353369 main.go:141] libmachine: (multinode-112998-m02)     <bootmenu enable='no'/>
	I1128 03:05:07.273446  353369 main.go:141] libmachine: (multinode-112998-m02)   </os>
	I1128 03:05:07.273459  353369 main.go:141] libmachine: (multinode-112998-m02)   <devices>
	I1128 03:05:07.273473  353369 main.go:141] libmachine: (multinode-112998-m02)     <disk type='file' device='cdrom'>
	I1128 03:05:07.273513  353369 main.go:141] libmachine: (multinode-112998-m02)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/boot2docker.iso'/>
	I1128 03:05:07.273545  353369 main.go:141] libmachine: (multinode-112998-m02)       <target dev='hdc' bus='scsi'/>
	I1128 03:05:07.273582  353369 main.go:141] libmachine: (multinode-112998-m02)       <readonly/>
	I1128 03:05:07.273606  353369 main.go:141] libmachine: (multinode-112998-m02)     </disk>
	I1128 03:05:07.273622  353369 main.go:141] libmachine: (multinode-112998-m02)     <disk type='file' device='disk'>
	I1128 03:05:07.273635  353369 main.go:141] libmachine: (multinode-112998-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1128 03:05:07.273653  353369 main.go:141] libmachine: (multinode-112998-m02)       <source file='/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/multinode-112998-m02.rawdisk'/>
	I1128 03:05:07.273666  353369 main.go:141] libmachine: (multinode-112998-m02)       <target dev='hda' bus='virtio'/>
	I1128 03:05:07.273685  353369 main.go:141] libmachine: (multinode-112998-m02)     </disk>
	I1128 03:05:07.273705  353369 main.go:141] libmachine: (multinode-112998-m02)     <interface type='network'>
	I1128 03:05:07.273722  353369 main.go:141] libmachine: (multinode-112998-m02)       <source network='mk-multinode-112998'/>
	I1128 03:05:07.273734  353369 main.go:141] libmachine: (multinode-112998-m02)       <model type='virtio'/>
	I1128 03:05:07.273744  353369 main.go:141] libmachine: (multinode-112998-m02)     </interface>
	I1128 03:05:07.273756  353369 main.go:141] libmachine: (multinode-112998-m02)     <interface type='network'>
	I1128 03:05:07.273769  353369 main.go:141] libmachine: (multinode-112998-m02)       <source network='default'/>
	I1128 03:05:07.273786  353369 main.go:141] libmachine: (multinode-112998-m02)       <model type='virtio'/>
	I1128 03:05:07.273799  353369 main.go:141] libmachine: (multinode-112998-m02)     </interface>
	I1128 03:05:07.273810  353369 main.go:141] libmachine: (multinode-112998-m02)     <serial type='pty'>
	I1128 03:05:07.273824  353369 main.go:141] libmachine: (multinode-112998-m02)       <target port='0'/>
	I1128 03:05:07.273836  353369 main.go:141] libmachine: (multinode-112998-m02)     </serial>
	I1128 03:05:07.273846  353369 main.go:141] libmachine: (multinode-112998-m02)     <console type='pty'>
	I1128 03:05:07.273863  353369 main.go:141] libmachine: (multinode-112998-m02)       <target type='serial' port='0'/>
	I1128 03:05:07.273880  353369 main.go:141] libmachine: (multinode-112998-m02)     </console>
	I1128 03:05:07.273892  353369 main.go:141] libmachine: (multinode-112998-m02)     <rng model='virtio'>
	I1128 03:05:07.273906  353369 main.go:141] libmachine: (multinode-112998-m02)       <backend model='random'>/dev/random</backend>
	I1128 03:05:07.273920  353369 main.go:141] libmachine: (multinode-112998-m02)     </rng>
	I1128 03:05:07.273930  353369 main.go:141] libmachine: (multinode-112998-m02)     
	I1128 03:05:07.273942  353369 main.go:141] libmachine: (multinode-112998-m02)     
	I1128 03:05:07.273958  353369 main.go:141] libmachine: (multinode-112998-m02)   </devices>
	I1128 03:05:07.273969  353369 main.go:141] libmachine: (multinode-112998-m02) </domain>
	I1128 03:05:07.273984  353369 main.go:141] libmachine: (multinode-112998-m02) 
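(Note, not part of the captured log: the domain definition logged above is plain libvirt XML. A minimal sketch of rendering such a definition from a Go text/template follows; the struct fields, paths, and the eventual call that would define the domain in libvirt are assumptions for illustration, not the driver's actual code.)

package main

import (
	"os"
	"text/template"
)

// Trimmed-down version of the <domain> definition seen in the log above.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISOPath}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.PrivateNetwork}}'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name, ISOPath, DiskPath, PrivateNetwork string
	MemoryMiB, CPUs                         int
}

func main() {
	cfg := domainConfig{
		Name:           "multinode-112998-m02",
		MemoryMiB:      2200,
		CPUs:           2,
		ISOPath:        "/path/to/boot2docker.iso",        // placeholder path
		DiskPath:       "/path/to/multinode-112998-m02.rawdisk", // placeholder path
		PrivateNetwork: "mk-multinode-112998",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// The rendered XML would then be handed to libvirt (e.g. a DomainDefineXML-style call).
	if err := tmpl.Execute(os.Stdout, &cfg); err != nil {
		panic(err)
	}
}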
	I1128 03:05:07.280517  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:4d:5e:d4 in network default
	I1128 03:05:07.281079  353369 main.go:141] libmachine: (multinode-112998-m02) Ensuring networks are active...
	I1128 03:05:07.281117  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:07.281716  353369 main.go:141] libmachine: (multinode-112998-m02) Ensuring network default is active
	I1128 03:05:07.281980  353369 main.go:141] libmachine: (multinode-112998-m02) Ensuring network mk-multinode-112998 is active
	I1128 03:05:07.282315  353369 main.go:141] libmachine: (multinode-112998-m02) Getting domain xml...
	I1128 03:05:07.283005  353369 main.go:141] libmachine: (multinode-112998-m02) Creating domain...
	I1128 03:05:08.526215  353369 main.go:141] libmachine: (multinode-112998-m02) Waiting to get IP...
	I1128 03:05:08.526991  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:08.527400  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:08.527421  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:08.527388  353730 retry.go:31] will retry after 229.469077ms: waiting for machine to come up
	I1128 03:05:08.759443  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:08.760010  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:08.760046  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:08.759950  353730 retry.go:31] will retry after 239.87892ms: waiting for machine to come up
	I1128 03:05:09.001360  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:09.001785  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:09.001817  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:09.001727  353730 retry.go:31] will retry after 470.819982ms: waiting for machine to come up
	I1128 03:05:09.474391  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:09.474799  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:09.474831  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:09.474738  353730 retry.go:31] will retry after 427.571057ms: waiting for machine to come up
	I1128 03:05:09.904517  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:09.905048  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:09.905079  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:09.904987  353730 retry.go:31] will retry after 458.684124ms: waiting for machine to come up
	I1128 03:05:10.365534  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:10.365929  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:10.365956  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:10.365871  353730 retry.go:31] will retry after 716.623346ms: waiting for machine to come up
	I1128 03:05:11.083818  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:11.084397  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:11.084419  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:11.084310  353730 retry.go:31] will retry after 1.155195063s: waiting for machine to come up
	I1128 03:05:12.241456  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:12.241902  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:12.241936  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:12.241848  353730 retry.go:31] will retry after 915.55694ms: waiting for machine to come up
	I1128 03:05:13.158971  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:13.159384  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:13.159414  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:13.159325  353730 retry.go:31] will retry after 1.477690898s: waiting for machine to come up
	I1128 03:05:14.639068  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:14.639747  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:14.639783  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:14.639689  353730 retry.go:31] will retry after 2.219712318s: waiting for machine to come up
	I1128 03:05:16.860874  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:16.861286  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:16.861320  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:16.861231  353730 retry.go:31] will retry after 1.930151542s: waiting for machine to come up
	I1128 03:05:18.794389  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:18.794938  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:18.794974  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:18.794889  353730 retry.go:31] will retry after 3.127015436s: waiting for machine to come up
	I1128 03:05:21.923934  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:21.924360  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:21.924394  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:21.924297  353730 retry.go:31] will retry after 2.833208179s: waiting for machine to come up
	I1128 03:05:24.761394  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:24.761780  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find current IP address of domain multinode-112998-m02 in network mk-multinode-112998
	I1128 03:05:24.761820  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | I1128 03:05:24.761761  353730 retry.go:31] will retry after 4.861305847s: waiting for machine to come up
	I1128 03:05:29.626060  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:29.626539  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has current primary IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:29.626579  353369 main.go:141] libmachine: (multinode-112998-m02) Found IP for machine: 192.168.39.31
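(Note, not part of the captured log: the "unable to find current IP address ... will retry after ..." lines above are a poll-with-growing-delay loop waiting for the new VM to pick up a DHCP lease. A self-contained sketch of that pattern follows; lookupIP is a placeholder, not minikube's retry package.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a stand-in for code that would inspect the host's DHCP leases for this MAC.
func lookupIP(mac string) (string, error) {
	return "", errors.New("no lease yet")
}

// waitForIP polls until an IP appears or the timeout expires, roughly doubling the delay
// between attempts, similar to the growing retry intervals seen in the log above.
func waitForIP(mac string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		fmt.Printf("no IP yet, retrying after %v\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("timed out waiting for IP of %s", mac)
}

func main() {
	ip, err := waitForIP("52:54:00:f0:32:00", 30*time.Second)
	fmt.Println(ip, err)
}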
	I1128 03:05:29.626599  353369 main.go:141] libmachine: (multinode-112998-m02) Reserving static IP address...
	I1128 03:05:29.627026  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find host DHCP lease matching {name: "multinode-112998-m02", mac: "52:54:00:f0:32:00", ip: "192.168.39.31"} in network mk-multinode-112998
	I1128 03:05:29.700692  353369 main.go:141] libmachine: (multinode-112998-m02) Reserved static IP address: 192.168.39.31
	I1128 03:05:29.700794  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Getting to WaitForSSH function...
	I1128 03:05:29.700810  353369 main.go:141] libmachine: (multinode-112998-m02) Waiting for SSH to be available...
	I1128 03:05:29.703405  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:29.703721  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998
	I1128 03:05:29.703743  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | unable to find defined IP address of network mk-multinode-112998 interface with MAC address 52:54:00:f0:32:00
	I1128 03:05:29.703898  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Using SSH client type: external
	I1128 03:05:29.703941  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa (-rw-------)
	I1128 03:05:29.703977  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 03:05:29.704000  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | About to run SSH command:
	I1128 03:05:29.704034  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | exit 0
	I1128 03:05:29.708515  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | SSH cmd err, output: exit status 255: 
	I1128 03:05:29.708542  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1128 03:05:29.708551  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | command : exit 0
	I1128 03:05:29.708557  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | err     : exit status 255
	I1128 03:05:29.708565  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | output  : 
	I1128 03:05:32.709559  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Getting to WaitForSSH function...
	I1128 03:05:32.712077  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.712497  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:32.712533  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.712711  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Using SSH client type: external
	I1128 03:05:32.712744  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa (-rw-------)
	I1128 03:05:32.712778  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.31 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 03:05:32.712794  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | About to run SSH command:
	I1128 03:05:32.712821  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | exit 0
	I1128 03:05:32.800761  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | SSH cmd err, output: <nil>: 
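(Note, not part of the captured log: the WaitForSSH phase above invokes the external ssh binary to run "exit 0" on the guest; exit status 255 means the connection was refused or dropped, and a zero exit means SSH is up. A minimal sketch of that readiness probe follows; the host, user, and key path are placeholders.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true once "exit 0" succeeds on the remote host over SSH.
func sshReady(host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@"+host, "exit 0")
	// A non-nil error covers connection failures such as exit status 255 in the log above.
	return cmd.Run() == nil
}

func main() {
	for i := 0; i < 10; i++ {
		if sshReady("192.168.39.31", "/path/to/id_rsa") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second) // the log retries a few seconds after each failure
	}
	fmt.Println("gave up waiting for SSH")
}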
	I1128 03:05:32.801111  353369 main.go:141] libmachine: (multinode-112998-m02) KVM machine creation complete!
	I1128 03:05:32.801475  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetConfigRaw
	I1128 03:05:32.802030  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:32.802267  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:32.802431  353369 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1128 03:05:32.802447  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetState
	I1128 03:05:32.803842  353369 main.go:141] libmachine: Detecting operating system of created instance...
	I1128 03:05:32.803864  353369 main.go:141] libmachine: Waiting for SSH to be available...
	I1128 03:05:32.803874  353369 main.go:141] libmachine: Getting to WaitForSSH function...
	I1128 03:05:32.803885  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:32.806769  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.807187  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:32.807226  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.807358  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:32.807548  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:32.807726  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:32.807890  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:32.808064  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:32.808503  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:32.808519  353369 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1128 03:05:32.924311  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:05:32.924338  353369 main.go:141] libmachine: Detecting the provisioner...
	I1128 03:05:32.924362  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:32.926987  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.927358  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:32.927384  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:32.927555  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:32.927776  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:32.927949  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:32.928057  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:32.928288  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:32.928646  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:32.928658  353369 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1128 03:05:33.046048  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g21ec34a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I1128 03:05:33.046136  353369 main.go:141] libmachine: found compatible host: buildroot
	I1128 03:05:33.046151  353369 main.go:141] libmachine: Provisioning with buildroot...
	I1128 03:05:33.046166  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:05:33.046474  353369 buildroot.go:166] provisioning hostname "multinode-112998-m02"
	I1128 03:05:33.046507  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:05:33.046698  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:33.049649  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.049992  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.050031  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.050169  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:33.050346  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.050500  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.050653  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:33.050822  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:33.051158  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:33.051173  353369 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998-m02 && echo "multinode-112998-m02" | sudo tee /etc/hostname
	I1128 03:05:33.185080  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-112998-m02
	
	I1128 03:05:33.185117  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:33.187611  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.187960  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.187992  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.188195  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:33.188417  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.188599  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.188739  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:33.188922  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:33.189306  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:33.189328  353369 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-112998-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-112998-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-112998-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:05:33.313920  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:05:33.313963  353369 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:05:33.313987  353369 buildroot.go:174] setting up certificates
	I1128 03:05:33.314005  353369 provision.go:83] configureAuth start
	I1128 03:05:33.314020  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:05:33.314313  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:05:33.316864  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.317338  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.317372  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.317574  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:33.319898  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.320244  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.320274  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.320454  353369 provision.go:138] copyHostCerts
	I1128 03:05:33.320481  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:05:33.320524  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:05:33.320533  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:05:33.320595  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:05:33.320665  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:05:33.320681  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:05:33.320690  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:05:33.320716  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:05:33.320799  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:05:33.320820  353369 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:05:33.320824  353369 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:05:33.320849  353369 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:05:33.320914  353369 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.multinode-112998-m02 san=[192.168.39.31 192.168.39.31 localhost 127.0.0.1 minikube multinode-112998-m02]
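(Note, not part of the captured log: the line above generates a server certificate signed by the minikube CA, with the node's IP, localhost, and hostname as subject alternative names. A self-contained sketch of issuing such a certificate with Go's standard library follows; it generates a throwaway CA in memory for illustration, whereas the real flow loads ca.pem/ca-key.pem from disk, and error handling is elided for brevity.)

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (illustrative only; real code would load the existing CA key pair).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with SANs like those listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-112998-m02"}},
		DNSNames:     []string{"localhost", "minikube", "multinode-112998-m02"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.31"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}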
	I1128 03:05:33.630130  353369 provision.go:172] copyRemoteCerts
	I1128 03:05:33.630219  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:05:33.630264  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:33.633354  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.633812  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.633842  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.634022  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:33.634319  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.634499  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:33.634673  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:05:33.724118  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 03:05:33.724193  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:05:33.749533  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 03:05:33.749629  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 03:05:33.774410  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 03:05:33.774481  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 03:05:33.798049  353369 provision.go:86] duration metric: configureAuth took 484.027544ms
	I1128 03:05:33.798094  353369 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:05:33.798320  353369 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:05:33.798408  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:33.801227  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.801605  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:33.801628  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:33.801853  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:33.802045  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.802256  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:33.802414  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:33.802617  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:33.802933  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:33.802949  353369 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:05:34.114799  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:05:34.114833  353369 main.go:141] libmachine: Checking connection to Docker...
	I1128 03:05:34.114844  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetURL
	I1128 03:05:34.116308  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | Using libvirt version 6000000
	I1128 03:05:34.118807  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.119209  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.119241  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.119449  353369 main.go:141] libmachine: Docker is up and running!
	I1128 03:05:34.119470  353369 main.go:141] libmachine: Reticulating splines...
	I1128 03:05:34.119480  353369 client.go:171] LocalClient.Create took 27.40165589s
	I1128 03:05:34.119513  353369 start.go:167] duration metric: libmachine.API.Create for "multinode-112998" took 27.4017318s
	I1128 03:05:34.119527  353369 start.go:300] post-start starting for "multinode-112998-m02" (driver="kvm2")
	I1128 03:05:34.119542  353369 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:05:34.119568  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:34.119833  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:05:34.119864  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:34.122168  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.122705  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.122735  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.122901  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:34.123097  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:34.123270  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:34.123438  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:05:34.211854  353369 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:05:34.216625  353369 command_runner.go:130] > NAME=Buildroot
	I1128 03:05:34.216653  353369 command_runner.go:130] > VERSION=2021.02.12-1-g21ec34a-dirty
	I1128 03:05:34.216661  353369 command_runner.go:130] > ID=buildroot
	I1128 03:05:34.216669  353369 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 03:05:34.216677  353369 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 03:05:34.216807  353369 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 03:05:34.216833  353369 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:05:34.216935  353369 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:05:34.217042  353369 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:05:34.217056  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 03:05:34.217158  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:05:34.226318  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:05:34.249807  353369 start.go:303] post-start completed in 130.261562ms
	I1128 03:05:34.249862  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetConfigRaw
	I1128 03:05:34.250506  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:05:34.253264  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.253710  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.253748  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.254051  353369 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:05:34.254241  353369 start.go:128] duration metric: createHost completed in 27.555199113s
	I1128 03:05:34.254265  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:34.256895  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.257235  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.257263  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.257441  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:34.257643  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:34.257813  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:34.257991  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:34.258166  353369 main.go:141] libmachine: Using SSH client type: native
	I1128 03:05:34.258483  353369 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:05:34.258494  353369 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 03:05:34.373586  353369 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701140734.360425258
	
	I1128 03:05:34.373617  353369 fix.go:206] guest clock: 1701140734.360425258
	I1128 03:05:34.373627  353369 fix.go:219] Guest: 2023-11-28 03:05:34.360425258 +0000 UTC Remote: 2023-11-28 03:05:34.25425291 +0000 UTC m=+94.824998390 (delta=106.172348ms)
	I1128 03:05:34.373648  353369 fix.go:190] guest clock delta is within tolerance: 106.172348ms
	I1128 03:05:34.373655  353369 start.go:83] releasing machines lock for "multinode-112998-m02", held for 27.674743494s
	I1128 03:05:34.373673  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:34.373939  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:05:34.376792  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.377191  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.377229  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.379681  353369 out.go:177] * Found network options:
	I1128 03:05:34.381119  353369 out.go:177]   - NO_PROXY=192.168.39.73
	W1128 03:05:34.382520  353369 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:05:34.382577  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:34.383183  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:34.383381  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:05:34.383482  353369 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:05:34.383522  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	W1128 03:05:34.383542  353369 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:05:34.383613  353369 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:05:34.383629  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:05:34.386207  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.386493  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.386530  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.386552  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.386722  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:34.386891  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:34.387076  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:34.387120  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:34.387166  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:34.387216  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:05:34.387337  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:05:34.387519  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:05:34.387677  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:05:34.387855  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:05:34.631401  353369 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 03:05:34.631437  353369 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 03:05:34.637717  353369 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 03:05:34.637771  353369 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:05:34.637840  353369 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:05:34.652683  353369 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1128 03:05:34.652757  353369 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 03:05:34.652770  353369 start.go:472] detecting cgroup driver to use...
	I1128 03:05:34.652852  353369 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:05:34.666289  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:05:34.679274  353369 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:05:34.679354  353369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:05:34.693179  353369 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:05:34.705893  353369 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 03:05:34.807950  353369 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1128 03:05:34.808065  353369 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:05:34.925700  353369 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 03:05:34.925747  353369 docker.go:219] disabling docker service ...
	I1128 03:05:34.925831  353369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:05:34.940155  353369 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:05:34.952866  353369 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1128 03:05:34.952981  353369 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:05:35.067953  353369 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 03:05:35.068065  353369 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:05:35.081880  353369 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1128 03:05:35.082175  353369 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 03:05:35.182998  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:05:35.197241  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:05:35.215425  353369 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 03:05:35.215492  353369 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 03:05:35.215556  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:05:35.226873  353369 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 03:05:35.226950  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:05:35.237686  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:05:35.246910  353369 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:05:35.257329  353369 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 03:05:35.269143  353369 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 03:05:35.279150  353369 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:05:35.279312  353369 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:05:35.279373  353369 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 03:05:35.293004  353369 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 03:05:35.302692  353369 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 03:05:35.413753  353369 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 03:05:35.586185  353369 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 03:05:35.586278  353369 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 03:05:35.591863  353369 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 03:05:35.591899  353369 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 03:05:35.591910  353369 command_runner.go:130] > Device: 16h/22d	Inode: 708         Links: 1
	I1128 03:05:35.591921  353369 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:05:35.591929  353369 command_runner.go:130] > Access: 2023-11-28 03:05:35.560458198 +0000
	I1128 03:05:35.591939  353369 command_runner.go:130] > Modify: 2023-11-28 03:05:35.560458198 +0000
	I1128 03:05:35.591946  353369 command_runner.go:130] > Change: 2023-11-28 03:05:35.560458198 +0000
	I1128 03:05:35.591952  353369 command_runner.go:130] >  Birth: -
	I1128 03:05:35.592149  353369 start.go:540] Will wait 60s for crictl version
	I1128 03:05:35.592217  353369 ssh_runner.go:195] Run: which crictl
	I1128 03:05:35.597465  353369 command_runner.go:130] > /usr/bin/crictl
	I1128 03:05:35.597551  353369 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 03:05:35.639237  353369 command_runner.go:130] > Version:  0.1.0
	I1128 03:05:35.639336  353369 command_runner.go:130] > RuntimeName:  cri-o
	I1128 03:05:35.639685  353369 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 03:05:35.639762  353369 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 03:05:35.641853  353369 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 03:05:35.641939  353369 ssh_runner.go:195] Run: crio --version
	I1128 03:05:35.688332  353369 command_runner.go:130] > crio version 1.24.1
	I1128 03:05:35.688361  353369 command_runner.go:130] > Version:          1.24.1
	I1128 03:05:35.688371  353369 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:05:35.688378  353369 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:05:35.688387  353369 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:05:35.688395  353369 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:05:35.688402  353369 command_runner.go:130] > Compiler:         gc
	I1128 03:05:35.688410  353369 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:05:35.688418  353369 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:05:35.688429  353369 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:05:35.688443  353369 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:05:35.688450  353369 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:05:35.688533  353369 ssh_runner.go:195] Run: crio --version
	I1128 03:05:35.737731  353369 command_runner.go:130] > crio version 1.24.1
	I1128 03:05:35.737758  353369 command_runner.go:130] > Version:          1.24.1
	I1128 03:05:35.737768  353369 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:05:35.737775  353369 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:05:35.737785  353369 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:05:35.737792  353369 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:05:35.737798  353369 command_runner.go:130] > Compiler:         gc
	I1128 03:05:35.737806  353369 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:05:35.737813  353369 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:05:35.737823  353369 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:05:35.737835  353369 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:05:35.737845  353369 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:05:35.742303  353369 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 03:05:35.743777  353369 out.go:177]   - env NO_PROXY=192.168.39.73
	I1128 03:05:35.745282  353369 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:05:35.747923  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:35.748286  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:05:35.748319  353369 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:05:35.748496  353369 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 03:05:35.752663  353369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 03:05:35.764192  353369 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998 for IP: 192.168.39.31
	I1128 03:05:35.764228  353369 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:05:35.764414  353369 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 03:05:35.764485  353369 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 03:05:35.764502  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 03:05:35.764524  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 03:05:35.764548  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 03:05:35.764565  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 03:05:35.764629  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 03:05:35.764665  353369 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 03:05:35.764679  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 03:05:35.764716  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 03:05:35.764749  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 03:05:35.764789  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 03:05:35.764844  353369 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:05:35.764906  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:05:35.764928  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 03:05:35.764948  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 03:05:35.765472  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 03:05:35.787740  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 03:05:35.810432  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 03:05:35.832225  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 03:05:35.854236  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 03:05:35.876237  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 03:05:35.899597  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 03:05:35.923639  353369 ssh_runner.go:195] Run: openssl version
	I1128 03:05:35.928708  353369 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 03:05:35.929016  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 03:05:35.938139  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:05:35.942371  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:05:35.942435  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:05:35.942506  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:05:35.947567  353369 command_runner.go:130] > b5213941
	I1128 03:05:35.947938  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 03:05:35.956912  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 03:05:35.966032  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 03:05:35.970497  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:05:35.970534  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:05:35.970571  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 03:05:35.975613  353369 command_runner.go:130] > 51391683
	I1128 03:05:35.975935  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 03:05:35.985079  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 03:05:35.994452  353369 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 03:05:35.998772  353369 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:05:35.998809  353369 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:05:35.998855  353369 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 03:05:36.003768  353369 command_runner.go:130] > 3ec20f2e
	I1128 03:05:36.004049  353369 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 03:05:36.013113  353369 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 03:05:36.016954  353369 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:05:36.017297  353369 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:05:36.017407  353369 ssh_runner.go:195] Run: crio config
	I1128 03:05:36.066813  353369 command_runner.go:130] ! time="2023-11-28 03:05:36.055915089Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 03:05:36.066852  353369 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 03:05:36.076776  353369 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 03:05:36.076806  353369 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 03:05:36.076816  353369 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 03:05:36.076822  353369 command_runner.go:130] > #
	I1128 03:05:36.076833  353369 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 03:05:36.076842  353369 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 03:05:36.076851  353369 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 03:05:36.076861  353369 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 03:05:36.076870  353369 command_runner.go:130] > # reload'.
	I1128 03:05:36.076893  353369 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 03:05:36.076907  353369 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 03:05:36.076917  353369 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 03:05:36.076928  353369 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 03:05:36.076936  353369 command_runner.go:130] > [crio]
	I1128 03:05:36.076945  353369 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 03:05:36.076954  353369 command_runner.go:130] > # containers images, in this directory.
	I1128 03:05:36.076962  353369 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 03:05:36.076992  353369 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 03:05:36.077012  353369 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 03:05:36.077022  353369 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 03:05:36.077035  353369 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 03:05:36.077045  353369 command_runner.go:130] > storage_driver = "overlay"
	I1128 03:05:36.077056  353369 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 03:05:36.077072  353369 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 03:05:36.077082  353369 command_runner.go:130] > storage_option = [
	I1128 03:05:36.077090  353369 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 03:05:36.077094  353369 command_runner.go:130] > ]
	I1128 03:05:36.077103  353369 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 03:05:36.077111  353369 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 03:05:36.077118  353369 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 03:05:36.077124  353369 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 03:05:36.077132  353369 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 03:05:36.077137  353369 command_runner.go:130] > # always happen on a node reboot
	I1128 03:05:36.077145  353369 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 03:05:36.077150  353369 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 03:05:36.077159  353369 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 03:05:36.077169  353369 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 03:05:36.077177  353369 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 03:05:36.077184  353369 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 03:05:36.077192  353369 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 03:05:36.077199  353369 command_runner.go:130] > # internal_wipe = true
	I1128 03:05:36.077205  353369 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 03:05:36.077214  353369 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 03:05:36.077219  353369 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 03:05:36.077227  353369 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 03:05:36.077233  353369 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 03:05:36.077239  353369 command_runner.go:130] > [crio.api]
	I1128 03:05:36.077245  353369 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 03:05:36.077251  353369 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 03:05:36.077256  353369 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 03:05:36.077263  353369 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 03:05:36.077270  353369 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 03:05:36.077277  353369 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 03:05:36.077283  353369 command_runner.go:130] > # stream_port = "0"
	I1128 03:05:36.077289  353369 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 03:05:36.077295  353369 command_runner.go:130] > # stream_enable_tls = false
	I1128 03:05:36.077301  353369 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 03:05:36.077308  353369 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 03:05:36.077314  353369 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 03:05:36.077323  353369 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 03:05:36.077329  353369 command_runner.go:130] > # minutes.
	I1128 03:05:36.077333  353369 command_runner.go:130] > # stream_tls_cert = ""
	I1128 03:05:36.077341  353369 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 03:05:36.077350  353369 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 03:05:36.077355  353369 command_runner.go:130] > # stream_tls_key = ""
	I1128 03:05:36.077363  353369 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 03:05:36.077369  353369 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 03:05:36.077374  353369 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 03:05:36.077377  353369 command_runner.go:130] > # stream_tls_ca = ""
	I1128 03:05:36.077385  353369 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:05:36.077391  353369 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 03:05:36.077398  353369 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:05:36.077405  353369 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 03:05:36.077421  353369 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 03:05:36.077432  353369 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 03:05:36.077436  353369 command_runner.go:130] > [crio.runtime]
	I1128 03:05:36.077442  353369 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 03:05:36.077447  353369 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 03:05:36.077453  353369 command_runner.go:130] > # "nofile=1024:2048"
	I1128 03:05:36.077459  353369 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 03:05:36.077465  353369 command_runner.go:130] > # default_ulimits = [
	I1128 03:05:36.077469  353369 command_runner.go:130] > # ]
	I1128 03:05:36.077477  353369 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 03:05:36.077483  353369 command_runner.go:130] > # no_pivot = false
	I1128 03:05:36.077489  353369 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 03:05:36.077497  353369 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 03:05:36.077504  353369 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 03:05:36.077512  353369 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 03:05:36.077519  353369 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 03:05:36.077525  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:05:36.077532  353369 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 03:05:36.077537  353369 command_runner.go:130] > # Cgroup setting for conmon
	I1128 03:05:36.077546  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 03:05:36.077552  353369 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 03:05:36.077559  353369 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 03:05:36.077567  353369 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 03:05:36.077576  353369 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:05:36.077582  353369 command_runner.go:130] > conmon_env = [
	I1128 03:05:36.077588  353369 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 03:05:36.077594  353369 command_runner.go:130] > ]
	I1128 03:05:36.077622  353369 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 03:05:36.077642  353369 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 03:05:36.077648  353369 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 03:05:36.077652  353369 command_runner.go:130] > # default_env = [
	I1128 03:05:36.077655  353369 command_runner.go:130] > # ]
	I1128 03:05:36.077660  353369 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 03:05:36.077665  353369 command_runner.go:130] > # selinux = false
	I1128 03:05:36.077671  353369 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 03:05:36.077677  353369 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 03:05:36.077682  353369 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 03:05:36.077686  353369 command_runner.go:130] > # seccomp_profile = ""
	I1128 03:05:36.077692  353369 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 03:05:36.077697  353369 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 03:05:36.077706  353369 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 03:05:36.077712  353369 command_runner.go:130] > # which might increase security.
	I1128 03:05:36.077716  353369 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 03:05:36.077723  353369 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 03:05:36.077732  353369 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 03:05:36.077738  353369 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 03:05:36.077746  353369 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 03:05:36.077753  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:05:36.077758  353369 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 03:05:36.077764  353369 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 03:05:36.077770  353369 command_runner.go:130] > # the cgroup blockio controller.
	I1128 03:05:36.077775  353369 command_runner.go:130] > # blockio_config_file = ""
	I1128 03:05:36.077783  353369 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 03:05:36.077790  353369 command_runner.go:130] > # irqbalance daemon.
	I1128 03:05:36.077795  353369 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 03:05:36.077803  353369 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 03:05:36.077811  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:05:36.077815  353369 command_runner.go:130] > # rdt_config_file = ""
	I1128 03:05:36.077823  353369 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 03:05:36.077830  353369 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 03:05:36.077836  353369 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 03:05:36.077842  353369 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 03:05:36.077848  353369 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 03:05:36.077857  353369 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 03:05:36.077861  353369 command_runner.go:130] > # will be added.
	I1128 03:05:36.077866  353369 command_runner.go:130] > # default_capabilities = [
	I1128 03:05:36.077872  353369 command_runner.go:130] > # 	"CHOWN",
	I1128 03:05:36.077877  353369 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 03:05:36.077883  353369 command_runner.go:130] > # 	"FSETID",
	I1128 03:05:36.077887  353369 command_runner.go:130] > # 	"FOWNER",
	I1128 03:05:36.077893  353369 command_runner.go:130] > # 	"SETGID",
	I1128 03:05:36.077897  353369 command_runner.go:130] > # 	"SETUID",
	I1128 03:05:36.077903  353369 command_runner.go:130] > # 	"SETPCAP",
	I1128 03:05:36.077907  353369 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 03:05:36.077913  353369 command_runner.go:130] > # 	"KILL",
	I1128 03:05:36.077916  353369 command_runner.go:130] > # ]
	I1128 03:05:36.077925  353369 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 03:05:36.077931  353369 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:05:36.077937  353369 command_runner.go:130] > # default_sysctls = [
	I1128 03:05:36.077944  353369 command_runner.go:130] > # ]
	I1128 03:05:36.077951  353369 command_runner.go:130] > # List of devices on the host that a
	I1128 03:05:36.077957  353369 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 03:05:36.077963  353369 command_runner.go:130] > # allowed_devices = [
	I1128 03:05:36.077967  353369 command_runner.go:130] > # 	"/dev/fuse",
	I1128 03:05:36.077973  353369 command_runner.go:130] > # ]
	I1128 03:05:36.077978  353369 command_runner.go:130] > # List of additional devices. specified as
	I1128 03:05:36.077987  353369 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 03:05:36.077999  353369 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 03:05:36.078019  353369 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:05:36.078026  353369 command_runner.go:130] > # additional_devices = [
	I1128 03:05:36.078029  353369 command_runner.go:130] > # ]
	I1128 03:05:36.078034  353369 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 03:05:36.078041  353369 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 03:05:36.078045  353369 command_runner.go:130] > # 	"/etc/cdi",
	I1128 03:05:36.078052  353369 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 03:05:36.078056  353369 command_runner.go:130] > # ]
	I1128 03:05:36.078067  353369 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 03:05:36.078080  353369 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 03:05:36.078090  353369 command_runner.go:130] > # Defaults to false.
	I1128 03:05:36.078101  353369 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 03:05:36.078113  353369 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 03:05:36.078126  353369 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 03:05:36.078132  353369 command_runner.go:130] > # hooks_dir = [
	I1128 03:05:36.078143  353369 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 03:05:36.078151  353369 command_runner.go:130] > # ]
	I1128 03:05:36.078164  353369 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 03:05:36.078176  353369 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 03:05:36.078188  353369 command_runner.go:130] > # its default mounts from the following two files:
	I1128 03:05:36.078196  353369 command_runner.go:130] > #
	I1128 03:05:36.078209  353369 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 03:05:36.078222  353369 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 03:05:36.078235  353369 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 03:05:36.078242  353369 command_runner.go:130] > #
	I1128 03:05:36.078248  353369 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 03:05:36.078257  353369 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 03:05:36.078265  353369 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 03:05:36.078272  353369 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 03:05:36.078276  353369 command_runner.go:130] > #
	I1128 03:05:36.078281  353369 command_runner.go:130] > # default_mounts_file = ""
	I1128 03:05:36.078289  353369 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 03:05:36.078296  353369 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 03:05:36.078303  353369 command_runner.go:130] > pids_limit = 1024
	I1128 03:05:36.078309  353369 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1128 03:05:36.078317  353369 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 03:05:36.078325  353369 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 03:05:36.078335  353369 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 03:05:36.078341  353369 command_runner.go:130] > # log_size_max = -1
	I1128 03:05:36.078348  353369 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I1128 03:05:36.078354  353369 command_runner.go:130] > # log_to_journald = false
	I1128 03:05:36.078361  353369 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 03:05:36.078370  353369 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 03:05:36.078377  353369 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 03:05:36.078382  353369 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 03:05:36.078389  353369 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 03:05:36.078396  353369 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 03:05:36.078402  353369 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 03:05:36.078409  353369 command_runner.go:130] > # read_only = false
	I1128 03:05:36.078415  353369 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 03:05:36.078423  353369 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 03:05:36.078428  353369 command_runner.go:130] > # live configuration reload.
	I1128 03:05:36.078435  353369 command_runner.go:130] > # log_level = "info"
	I1128 03:05:36.078440  353369 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 03:05:36.078448  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:05:36.078454  353369 command_runner.go:130] > # log_filter = ""
	I1128 03:05:36.078460  353369 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 03:05:36.078468  353369 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 03:05:36.078474  353369 command_runner.go:130] > # separated by comma.
	I1128 03:05:36.078479  353369 command_runner.go:130] > # uid_mappings = ""
	I1128 03:05:36.078487  353369 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 03:05:36.078495  353369 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 03:05:36.078501  353369 command_runner.go:130] > # separated by comma.
	I1128 03:05:36.078505  353369 command_runner.go:130] > # gid_mappings = ""
	I1128 03:05:36.078514  353369 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 03:05:36.078524  353369 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:05:36.078532  353369 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:05:36.078539  353369 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 03:05:36.078544  353369 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 03:05:36.078553  353369 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:05:36.078559  353369 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:05:36.078565  353369 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 03:05:36.078571  353369 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 03:05:36.078580  353369 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 03:05:36.078585  353369 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 03:05:36.078591  353369 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 03:05:36.078597  353369 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 03:05:36.078605  353369 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 03:05:36.078614  353369 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 03:05:36.078620  353369 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 03:05:36.078630  353369 command_runner.go:130] > drop_infra_ctr = false
	I1128 03:05:36.078639  353369 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 03:05:36.078645  353369 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 03:05:36.078654  353369 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 03:05:36.078660  353369 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 03:05:36.078666  353369 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 03:05:36.078674  353369 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 03:05:36.078681  353369 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 03:05:36.078687  353369 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 03:05:36.078694  353369 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 03:05:36.078700  353369 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 03:05:36.078708  353369 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 03:05:36.078714  353369 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 03:05:36.078721  353369 command_runner.go:130] > # default_runtime = "runc"
	I1128 03:05:36.078727  353369 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 03:05:36.078736  353369 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of creating it as a directory).
	I1128 03:05:36.078745  353369 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 03:05:36.078752  353369 command_runner.go:130] > # creation as a file is not desired either.
	I1128 03:05:36.078760  353369 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 03:05:36.078767  353369 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 03:05:36.078771  353369 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 03:05:36.078778  353369 command_runner.go:130] > # ]
	I1128 03:05:36.078784  353369 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 03:05:36.078792  353369 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 03:05:36.078800  353369 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 03:05:36.078808  353369 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 03:05:36.078813  353369 command_runner.go:130] > #
	I1128 03:05:36.078818  353369 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 03:05:36.078825  353369 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 03:05:36.078830  353369 command_runner.go:130] > #  runtime_type = "oci"
	I1128 03:05:36.078836  353369 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 03:05:36.078841  353369 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 03:05:36.078848  353369 command_runner.go:130] > #  allowed_annotations = []
	I1128 03:05:36.078851  353369 command_runner.go:130] > # Where:
	I1128 03:05:36.078858  353369 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 03:05:36.078866  353369 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 03:05:36.078875  353369 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 03:05:36.078883  353369 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 03:05:36.078889  353369 command_runner.go:130] > #   in $PATH.
	I1128 03:05:36.078895  353369 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 03:05:36.078902  353369 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 03:05:36.078908  353369 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 03:05:36.078914  353369 command_runner.go:130] > #   state.
	I1128 03:05:36.078920  353369 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 03:05:36.078928  353369 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 03:05:36.078937  353369 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 03:05:36.078943  353369 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 03:05:36.078952  353369 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 03:05:36.078960  353369 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 03:05:36.078967  353369 command_runner.go:130] > #   The currently recognized values are:
	I1128 03:05:36.078973  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 03:05:36.078983  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 03:05:36.078990  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 03:05:36.079002  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 03:05:36.079011  353369 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 03:05:36.079019  353369 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 03:05:36.079027  353369 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 03:05:36.079036  353369 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 03:05:36.079044  353369 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 03:05:36.079050  353369 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 03:05:36.079057  353369 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 03:05:36.079062  353369 command_runner.go:130] > runtime_type = "oci"
	I1128 03:05:36.079066  353369 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 03:05:36.079073  353369 command_runner.go:130] > runtime_config_path = ""
	I1128 03:05:36.079077  353369 command_runner.go:130] > monitor_path = ""
	I1128 03:05:36.079083  353369 command_runner.go:130] > monitor_cgroup = ""
	I1128 03:05:36.079087  353369 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 03:05:36.079093  353369 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 03:05:36.079100  353369 command_runner.go:130] > # running containers
	I1128 03:05:36.079105  353369 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 03:05:36.079114  353369 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 03:05:36.079140  353369 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 03:05:36.079148  353369 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I1128 03:05:36.079153  353369 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 03:05:36.079160  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 03:05:36.079165  353369 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 03:05:36.079172  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 03:05:36.079176  353369 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 03:05:36.079182  353369 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 03:05:36.079190  353369 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 03:05:36.079198  353369 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 03:05:36.079204  353369 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 03:05:36.079213  353369 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 03:05:36.079223  353369 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 03:05:36.079230  353369 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 03:05:36.079240  353369 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 03:05:36.079250  353369 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 03:05:36.079258  353369 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 03:05:36.079266  353369 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 03:05:36.079272  353369 command_runner.go:130] > # Example:
	I1128 03:05:36.079277  353369 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 03:05:36.079283  353369 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 03:05:36.079289  353369 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 03:05:36.079296  353369 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 03:05:36.079302  353369 command_runner.go:130] > # cpuset = 0
	I1128 03:05:36.079306  353369 command_runner.go:130] > # cpushares = "0-1"
	I1128 03:05:36.079312  353369 command_runner.go:130] > # Where:
	I1128 03:05:36.079317  353369 command_runner.go:130] > # The workload name is workload-type.
	I1128 03:05:36.079326  353369 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 03:05:36.079334  353369 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 03:05:36.079342  353369 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 03:05:36.079349  353369 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 03:05:36.079357  353369 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 03:05:36.079360  353369 command_runner.go:130] > # 
	I1128 03:05:36.079366  353369 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 03:05:36.079370  353369 command_runner.go:130] > #
	I1128 03:05:36.079378  353369 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 03:05:36.079386  353369 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 03:05:36.079393  353369 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 03:05:36.079402  353369 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 03:05:36.079410  353369 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 03:05:36.079416  353369 command_runner.go:130] > [crio.image]
	I1128 03:05:36.079423  353369 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 03:05:36.079429  353369 command_runner.go:130] > # default_transport = "docker://"
	I1128 03:05:36.079435  353369 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 03:05:36.079443  353369 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:05:36.079450  353369 command_runner.go:130] > # global_auth_file = ""
	I1128 03:05:36.079455  353369 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 03:05:36.079463  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:05:36.079471  353369 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 03:05:36.079479  353369 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 03:05:36.079486  353369 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:05:36.079493  353369 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:05:36.079497  353369 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 03:05:36.079503  353369 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 03:05:36.079512  353369 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 03:05:36.079520  353369 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 03:05:36.079527  353369 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 03:05:36.079533  353369 command_runner.go:130] > # pause_command = "/pause"
	I1128 03:05:36.079539  353369 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 03:05:36.079547  353369 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 03:05:36.079556  353369 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 03:05:36.079564  353369 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 03:05:36.079571  353369 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 03:05:36.079576  353369 command_runner.go:130] > # signature_policy = ""
	I1128 03:05:36.079583  353369 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 03:05:36.079589  353369 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 03:05:36.079596  353369 command_runner.go:130] > # changing them here.
	I1128 03:05:36.079600  353369 command_runner.go:130] > # insecure_registries = [
	I1128 03:05:36.079605  353369 command_runner.go:130] > # ]
	I1128 03:05:36.079613  353369 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 03:05:36.079621  353369 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 03:05:36.079628  353369 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 03:05:36.079633  353369 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 03:05:36.079640  353369 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 03:05:36.079646  353369 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 03:05:36.079652  353369 command_runner.go:130] > # CNI plugins.
	I1128 03:05:36.079656  353369 command_runner.go:130] > [crio.network]
	I1128 03:05:36.079664  353369 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 03:05:36.079672  353369 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1128 03:05:36.079676  353369 command_runner.go:130] > # cni_default_network = ""
	I1128 03:05:36.079682  353369 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 03:05:36.079689  353369 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 03:05:36.079695  353369 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 03:05:36.079701  353369 command_runner.go:130] > # plugin_dirs = [
	I1128 03:05:36.079705  353369 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 03:05:36.079711  353369 command_runner.go:130] > # ]
	I1128 03:05:36.079717  353369 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 03:05:36.079723  353369 command_runner.go:130] > [crio.metrics]
	I1128 03:05:36.079728  353369 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 03:05:36.079734  353369 command_runner.go:130] > enable_metrics = true
	I1128 03:05:36.079739  353369 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 03:05:36.079746  353369 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 03:05:36.079752  353369 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 03:05:36.079760  353369 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 03:05:36.079767  353369 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 03:05:36.079774  353369 command_runner.go:130] > # metrics_collectors = [
	I1128 03:05:36.079781  353369 command_runner.go:130] > # 	"operations",
	I1128 03:05:36.079785  353369 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 03:05:36.079793  353369 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 03:05:36.079798  353369 command_runner.go:130] > # 	"operations_errors",
	I1128 03:05:36.079802  353369 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 03:05:36.079809  353369 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 03:05:36.079814  353369 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 03:05:36.079820  353369 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 03:05:36.079824  353369 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 03:05:36.079831  353369 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 03:05:36.079835  353369 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 03:05:36.079841  353369 command_runner.go:130] > # 	"containers_oom_total",
	I1128 03:05:36.079846  353369 command_runner.go:130] > # 	"containers_oom",
	I1128 03:05:36.079854  353369 command_runner.go:130] > # 	"processes_defunct",
	I1128 03:05:36.079858  353369 command_runner.go:130] > # 	"operations_total",
	I1128 03:05:36.079865  353369 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 03:05:36.079870  353369 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 03:05:36.079877  353369 command_runner.go:130] > # 	"operations_errors_total",
	I1128 03:05:36.079882  353369 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 03:05:36.079888  353369 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 03:05:36.079893  353369 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 03:05:36.079899  353369 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 03:05:36.079904  353369 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 03:05:36.079910  353369 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 03:05:36.079914  353369 command_runner.go:130] > # ]
	I1128 03:05:36.079921  353369 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 03:05:36.079925  353369 command_runner.go:130] > # metrics_port = 9090
	I1128 03:05:36.079933  353369 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 03:05:36.079937  353369 command_runner.go:130] > # metrics_socket = ""
	I1128 03:05:36.079943  353369 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 03:05:36.079951  353369 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 03:05:36.079959  353369 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 03:05:36.079966  353369 command_runner.go:130] > # certificate on any modification event.
	I1128 03:05:36.079970  353369 command_runner.go:130] > # metrics_cert = ""
	I1128 03:05:36.079976  353369 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 03:05:36.079983  353369 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 03:05:36.079986  353369 command_runner.go:130] > # metrics_key = ""
	I1128 03:05:36.079998  353369 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 03:05:36.080004  353369 command_runner.go:130] > [crio.tracing]
	I1128 03:05:36.080010  353369 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 03:05:36.080016  353369 command_runner.go:130] > # enable_tracing = false
	I1128 03:05:36.080021  353369 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 03:05:36.080028  353369 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 03:05:36.080033  353369 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 03:05:36.080040  353369 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 03:05:36.080045  353369 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 03:05:36.080052  353369 command_runner.go:130] > [crio.stats]
	I1128 03:05:36.080058  353369 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 03:05:36.080066  353369 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 03:05:36.080073  353369 command_runner.go:130] > # stats_collection_period = 0
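The block above is CRI-O's generated configuration as echoed back by the provisioner. As a rough cross-check (not part of this test run, and assuming the multinode-112998 profile is still up with CRI-O at its default path), the effective configuration can be dumped directly on the node:

	# Sketch only: print CRI-O's effective configuration on the primary node of this profile.
	minikube -p multinode-112998 ssh -- sudo crio config | head -n 40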
	I1128 03:05:36.080143  353369 cni.go:84] Creating CNI manager for ""
	I1128 03:05:36.080152  353369 cni.go:136] 2 nodes found, recommending kindnet
	I1128 03:05:36.080164  353369 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 03:05:36.080185  353369 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-112998 NodeName:multinode-112998-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 03:05:36.080349  353369 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-112998-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
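For comparison with the generated manifest above, kubeadm can print its own defaults for the same config kinds; a minimal sketch (not executed by the test, and assuming a kubeadm binary of the same v1.28.x line):

	# Sketch: show kubeadm's default InitConfiguration/ClusterConfiguration plus the
	# kubelet and kube-proxy component configs, for diffing against the block above.
	kubeadm config print init-defaults --component-configs KubeletConfiguration,KubeProxyConfiguration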
	I1128 03:05:36.080426  353369 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
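The kubelet unit and 10-kubeadm.conf drop-in rendered above are written to the joining node a few lines below; assuming `minikube ssh` node selection works for this profile, they could be inspected afterwards with (illustrative only):

	# Sketch: view the kubelet systemd unit and its kubeadm drop-in on node m02.
	minikube -p multinode-112998 ssh -n m02 -- systemctl cat kubelet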
	I1128 03:05:36.080481  353369 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 03:05:36.092030  353369 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	I1128 03:05:36.092066  353369 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I1128 03:05:36.092111  353369 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I1128 03:05:36.102839  353369 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I1128 03:05:36.102853  353369 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I1128 03:05:36.102877  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I1128 03:05:36.102972  353369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I1128 03:05:36.102853  353369 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I1128 03:05:36.107347  353369 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1128 03:05:36.107391  353369 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I1128 03:05:36.107408  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I1128 03:05:39.524136  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1128 03:05:39.524227  353369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I1128 03:05:39.529654  353369 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1128 03:05:39.529906  353369 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I1128 03:05:39.529945  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I1128 03:05:43.898070  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:05:43.914037  353369 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I1128 03:05:43.914153  353369 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I1128 03:05:43.918987  353369 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1128 03:05:43.919053  353369 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I1128 03:05:43.919078  353369 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
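The three transfers above follow the standard dl.k8s.io layout, where each binary ships with a matching .sha256 file. A hand-rolled equivalent for one binary might look like the sketch below (the test itself serves the binaries from its local cache under .minikube/cache):

	# Sketch: download kubeadm v1.28.4 and verify it against its published checksum.
	VER=v1.28.4
	curl -fsSLO "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm"
	echo "$(curl -fsSL "https://dl.k8s.io/release/${VER}/bin/linux/amd64/kubeadm.sha256")  kubeadm" | sha256sum --check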
	I1128 03:05:44.425439  353369 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 03:05:44.433728  353369 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1128 03:05:44.451863  353369 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 03:05:44.470219  353369 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I1128 03:05:44.474312  353369 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 03:05:44.487007  353369 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:05:44.487316  353369 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:05:44.487416  353369 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:05:44.487469  353369 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:05:44.501995  353369 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33753
	I1128 03:05:44.502450  353369 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:05:44.502948  353369 main.go:141] libmachine: Using API Version  1
	I1128 03:05:44.502969  353369 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:05:44.503282  353369 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:05:44.503516  353369 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:05:44.503673  353369 start.go:304] JoinCluster: &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:05:44.503777  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 03:05:44.503799  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:05:44.506845  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:05:44.507233  353369 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:05:44.507275  353369 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:05:44.507407  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:05:44.507595  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:05:44.507741  353369 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:05:44.507921  353369 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:05:44.688323  353369 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token p16c4x.lqt94waaplt52x78 --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 03:05:44.699109  353369 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:05:44.699169  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p16c4x.lqt94waaplt52x78 --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-112998-m02"
	I1128 03:05:44.747795  353369 command_runner.go:130] ! W1128 03:05:44.742387     821 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 03:05:44.878765  353369 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 03:05:47.594244  353369 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 03:05:47.594276  353369 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 03:05:47.594296  353369 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 03:05:47.594309  353369 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:05:47.594318  353369 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:05:47.594326  353369 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 03:05:47.594340  353369 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 03:05:47.594354  353369 command_runner.go:130] > This node has joined the cluster:
	I1128 03:05:47.594367  353369 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 03:05:47.594379  353369 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 03:05:47.594393  353369 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 03:05:47.594441  353369 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token p16c4x.lqt94waaplt52x78 --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-112998-m02": (2.895234525s)
	I1128 03:05:47.594463  353369 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 03:05:47.849754  353369 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I1128 03:05:47.849814  353369 start.go:306] JoinCluster complete in 3.346140402s
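At this point the join has succeeded; as the kubeadm output above suggests, the new member should now be visible from the control plane (sketch only, assuming minikube named the kubeconfig context after the profile):

	kubectl --context multinode-112998 get nodes -o wide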
	I1128 03:05:47.849829  353369 cni.go:84] Creating CNI manager for ""
	I1128 03:05:47.849836  353369 cni.go:136] 2 nodes found, recommending kindnet
	I1128 03:05:47.849888  353369 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 03:05:47.855732  353369 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 03:05:47.855776  353369 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 03:05:47.855784  353369 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 03:05:47.855790  353369 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:05:47.855796  353369 command_runner.go:130] > Access: 2023-11-28 03:04:12.954160347 +0000
	I1128 03:05:47.855802  353369 command_runner.go:130] > Modify: 2023-11-16 19:19:18.000000000 +0000
	I1128 03:05:47.855807  353369 command_runner.go:130] > Change: 2023-11-28 03:04:11.106160347 +0000
	I1128 03:05:47.855811  353369 command_runner.go:130] >  Birth: -
	I1128 03:05:47.856089  353369 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 03:05:47.856108  353369 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 03:05:47.876099  353369 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 03:05:48.197317  353369 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:05:48.201769  353369 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:05:48.206828  353369 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 03:05:48.231138  353369 command_runner.go:130] > daemonset.apps/kindnet configured
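The kindnet manifest applied above is idempotent (everything reports unchanged/configured). One way to confirm the daemonset actually rolled out to both nodes, not run by the test and shown only for reference:

	kubectl --context multinode-112998 -n kube-system rollout status daemonset/kindnet --timeout=120s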
	I1128 03:05:48.234698  353369 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:05:48.235056  353369 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:05:48.235518  353369 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:05:48.235538  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:48.235550  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:48.235559  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:48.238680  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:48.238705  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:48.238715  353369 round_trippers.go:580]     Audit-Id: 9b386ac7-2878-49f1-b82e-bd3c703797c6
	I1128 03:05:48.238723  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:48.238731  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:48.238739  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:48.238747  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:48.238761  353369 round_trippers.go:580]     Content-Length: 291
	I1128 03:05:48.238773  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:48 GMT
	I1128 03:05:48.238800  353369 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"447","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 03:05:48.238902  353369 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-112998" context rescaled to 1 replicas
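The scale call above is the programmatic equivalent of the following kubectl command (reference only; the test drives the scale subresource through the REST client directly):

	kubectl --context multinode-112998 -n kube-system scale deployment/coredns --replicas=1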
	I1128 03:05:48.238939  353369 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:05:48.241004  353369 out.go:177] * Verifying Kubernetes components...
	I1128 03:05:48.243118  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:05:48.256507  353369 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:05:48.256794  353369 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:05:48.257124  353369 node_ready.go:35] waiting up to 6m0s for node "multinode-112998-m02" to be "Ready" ...
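The repeated GETs that follow are a hand-rolled readiness poll against the node object. Outside the test harness, the same wait could be expressed with kubectl (sketch, using the same 6m budget as above):

	kubectl --context multinode-112998 wait --for=condition=Ready node/multinode-112998-m02 --timeout=360s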
	I1128 03:05:48.257219  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:48.257232  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:48.257243  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:48.257254  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:48.260684  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:48.260707  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:48.260721  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:48.260729  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:48.260736  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:48.260747  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:48.260756  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:48 GMT
	I1128 03:05:48.260764  353369 round_trippers.go:580]     Audit-Id: c234f717-a368-41de-a4e4-c3ae6dc7bc0c
	I1128 03:05:48.260774  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:48.260945  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:48.261283  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:48.261300  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:48.261311  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:48.261320  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:48.264700  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:48.264723  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:48.264732  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:48.264741  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:48.264748  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:48 GMT
	I1128 03:05:48.264760  353369 round_trippers.go:580]     Audit-Id: e8f337bb-882f-492d-8616-10810672ddaa
	I1128 03:05:48.264769  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:48.264786  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:48.264798  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:48.264904  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:48.766009  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:48.766036  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:48.766045  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:48.766052  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:48.769656  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:48.769681  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:48.769691  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:48.769700  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:48 GMT
	I1128 03:05:48.769707  353369 round_trippers.go:580]     Audit-Id: 68216997-4be4-4a4e-bcfc-71e7cc77bd18
	I1128 03:05:48.769712  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:48.769719  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:48.769727  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:48.769736  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:48.769840  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:49.265471  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:49.265503  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:49.265512  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:49.265518  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:49.268522  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:49.268555  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:49.268566  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:49 GMT
	I1128 03:05:49.268575  353369 round_trippers.go:580]     Audit-Id: dde2fae9-8773-4f45-804b-7673cb565109
	I1128 03:05:49.268585  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:49.268593  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:49.268604  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:49.268612  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:49.268622  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:49.268674  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:49.765945  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:49.765970  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:49.765979  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:49.765985  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:49.769749  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:49.769774  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:49.769785  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:49.769792  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:49.769799  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:49.769806  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:49 GMT
	I1128 03:05:49.769814  353369 round_trippers.go:580]     Audit-Id: e35f491b-18f7-48b1-ac82-97b5911fd8d7
	I1128 03:05:49.769831  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:49.769844  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:49.769939  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:50.265595  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:50.265626  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:50.265638  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:50.265644  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:50.269263  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:50.269296  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:50.269306  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:50.269314  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:50.269322  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:50.269330  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:50 GMT
	I1128 03:05:50.269338  353369 round_trippers.go:580]     Audit-Id: 931982fc-bf00-45b5-8826-c7ec6141250e
	I1128 03:05:50.269351  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:50.269369  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:50.269545  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:50.269863  353369 node_ready.go:58] node "multinode-112998-m02" has status "Ready":"False"
	I1128 03:05:50.766212  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:50.766239  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:50.766248  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:50.766254  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:50.769179  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:50.769201  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:50.769208  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:50.769214  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:50.769219  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:50.769226  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:50.769233  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:50 GMT
	I1128 03:05:50.769245  353369 round_trippers.go:580]     Audit-Id: 07f657cd-6ec2-41e8-996d-a4963e1a0b3f
	I1128 03:05:50.769253  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:50.769351  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:51.265837  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:51.265883  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:51.265892  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:51.265898  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:51.268523  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:51.268553  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:51.268564  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:51.268573  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:51.268582  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:51.268591  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:51.268600  353369 round_trippers.go:580]     Content-Length: 3530
	I1128 03:05:51.268615  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:51 GMT
	I1128 03:05:51.268622  353369 round_trippers.go:580]     Audit-Id: 9ff3ecd8-83ac-423d-9616-b2bc61d3379e
	I1128 03:05:51.268742  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"508","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsTyp
e":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alp [truncated 2506 chars]
	I1128 03:05:51.765749  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:51.765782  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:51.765814  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:51.765820  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:51.768224  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:51.768246  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:51.768254  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:51.768259  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:51.768266  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:51.768272  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:51.768277  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:51 GMT
	I1128 03:05:51.768282  353369 round_trippers.go:580]     Audit-Id: bb5abbdf-fb89-41f2-acb7-d0eafe7303b1
	I1128 03:05:51.768287  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:51.768371  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:52.266015  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:52.266048  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:52.266056  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:52.266066  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:52.268674  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:52.268699  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:52.268707  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:52.268712  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:52 GMT
	I1128 03:05:52.268717  353369 round_trippers.go:580]     Audit-Id: 8e647c3f-259d-48f5-a849-6bb164cbd13f
	I1128 03:05:52.268722  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:52.268727  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:52.268733  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:52.268738  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:52.268783  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:52.765414  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:52.765446  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:52.765454  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:52.765461  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:52.768452  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:52.768478  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:52.768486  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:52.768492  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:52.768519  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:52.768529  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:52.768536  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:52.768545  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:52 GMT
	I1128 03:05:52.768555  353369 round_trippers.go:580]     Audit-Id: 77673b0a-69d3-490c-bfbf-1d5f7a12c2c0
	I1128 03:05:52.768649  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:52.768993  353369 node_ready.go:58] node "multinode-112998-m02" has status "Ready":"False"
	I1128 03:05:53.266162  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:53.266186  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:53.266194  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:53.266211  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:53.270528  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:53.270558  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:53.270567  353369 round_trippers.go:580]     Audit-Id: 04193d6c-ffa0-4c89-a280-32e53ec0cc28
	I1128 03:05:53.270573  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:53.270579  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:53.270584  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:53.270601  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:53.270612  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:53.270621  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:53 GMT
	I1128 03:05:53.270879  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:53.765566  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:53.765599  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:53.765611  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:53.765619  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:53.769475  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:53.769500  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:53.769511  353369 round_trippers.go:580]     Audit-Id: 18a05fa5-acc2-427a-9815-0b9d1ab39d85
	I1128 03:05:53.769518  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:53.769526  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:53.769534  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:53.769546  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:53.769555  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:53.769565  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:53 GMT
	I1128 03:05:53.769678  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:54.266413  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:54.266444  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:54.266457  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:54.266477  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:54.270261  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:54.270291  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:54.270302  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:54.270311  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:54.270319  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:54.270326  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:54 GMT
	I1128 03:05:54.270334  353369 round_trippers.go:580]     Audit-Id: 6f5d7e5d-b258-4063-9528-2ec60019d178
	I1128 03:05:54.270342  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:54.270355  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:54.270438  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:54.765416  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:54.765443  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:54.765452  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:54.765459  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:54.768177  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:54.768205  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:54.768214  353369 round_trippers.go:580]     Audit-Id: 71831eab-a566-4a64-8dd4-cd7fe79e1e7a
	I1128 03:05:54.768222  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:54.768230  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:54.768238  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:54.768244  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:54.768251  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:54.768258  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:54 GMT
	I1128 03:05:54.768357  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:55.265946  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:55.265988  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:55.265997  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:55.266006  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:55.269445  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:55.269483  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:55.269495  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:55.269507  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:55.269517  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:55 GMT
	I1128 03:05:55.269526  353369 round_trippers.go:580]     Audit-Id: 97d2b797-37b1-4ed6-8b36-92e3a6c694d9
	I1128 03:05:55.269536  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:55.269546  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:55.269552  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:55.269620  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:55.269952  353369 node_ready.go:58] node "multinode-112998-m02" has status "Ready":"False"
	I1128 03:05:55.766180  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:55.766211  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:55.766224  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:55.766254  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:55.770997  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:55.771030  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:55.771038  353369 round_trippers.go:580]     Content-Length: 3639
	I1128 03:05:55.771061  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:55 GMT
	I1128 03:05:55.771066  353369 round_trippers.go:580]     Audit-Id: f35c4c52-6c48-47e7-9471-8088281ed80b
	I1128 03:05:55.771071  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:55.771076  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:55.771084  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:55.771093  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:55.772609  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"519","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2615 chars]
	I1128 03:05:56.266294  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:56.266321  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.266330  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.266337  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.269185  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:56.269211  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.269222  353369 round_trippers.go:580]     Audit-Id: 211a8bc9-3431-4f47-b67c-52d79537e7db
	I1128 03:05:56.269231  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.269243  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.269250  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.269271  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.269282  353369 round_trippers.go:580]     Content-Length: 3725
	I1128 03:05:56.269291  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.269400  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"535","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2701 chars]
	I1128 03:05:56.269713  353369 node_ready.go:49] node "multinode-112998-m02" has status "Ready":"True"
	I1128 03:05:56.269740  353369 node_ready.go:38] duration metric: took 8.012596116s waiting for node "multinode-112998-m02" to be "Ready" ...
	I1128 03:05:56.269752  353369 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
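The GET loop above is a readiness poll: the same Node object is fetched repeatedly until its Ready condition flips to True. A minimal sketch of that pattern with client-go follows; the 500ms interval, helper name, and kubeconfig loading are assumptions made for the example, not minikube's node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls GET /api/v1/nodes/<name> until the NodeReady
	// condition reports True, mirroring the repeated requests in the log above.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "multinode-112998-m02", 6*time.Minute); err != nil {
			fmt.Println("node never became Ready:", err)
		}
	}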
	I1128 03:05:56.269845  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:05:56.269856  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.269865  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.269874  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.276455  353369 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1128 03:05:56.276477  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.276486  353369 round_trippers.go:580]     Audit-Id: 55829d1b-7267-4825-bd42-3596b89a8df4
	I1128 03:05:56.276494  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.276501  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.276507  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.276530  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.276540  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.278188  353369 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"535"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"443","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67324 chars]
	I1128 03:05:56.281809  353369 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.281923  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:05:56.281938  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.281949  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.281958  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.284779  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:56.284799  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.284809  353369 round_trippers.go:580]     Audit-Id: d30f4172-ce14-4baa-a629-c6fd4cdc0901
	I1128 03:05:56.284817  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.284825  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.284833  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.284841  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.284855  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.285127  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"443","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1128 03:05:56.285630  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.285652  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.285659  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.285665  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.287584  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.287599  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.287605  353369 round_trippers.go:580]     Audit-Id: 4afb026b-72d7-47ba-886a-9a33b1c51de0
	I1128 03:05:56.287611  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.287616  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.287621  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.287629  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.287640  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.287782  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:56.288103  353369 pod_ready.go:92] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:56.288118  353369 pod_ready.go:81] duration metric: took 6.273232ms waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
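The per-pod waits that follow (etcd, kube-apiserver, kube-controller-manager, kube-proxy) all reduce to the same test: list the kube-system pods once, then check each pod's PodReady condition. A small illustration of that check, reusing the client-go setup from the node sketch above (helper names are illustrative, not minikube's pod_ready.go):

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady reports whether the pod's PodReady condition is True.
	func podIsReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// systemPodsReady lists kube-system pods once (as in the PodList GET
	// earlier in this log) and reports whether every one of them is Ready.
	func systemPodsReady(ctx context.Context, cs kubernetes.Interface) (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				return false, nil
			}
		}
		return true, nil
	}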
	I1128 03:05:56.288126  353369 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.288177  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:05:56.288186  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.288193  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.288199  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.290081  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.290096  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.290104  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.290113  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.290122  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.290131  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.290146  353369 round_trippers.go:580]     Audit-Id: a254d562-aa3a-49b0-912d-cfd15eafa9e2
	I1128 03:05:56.290152  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.290299  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"408","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1128 03:05:56.290715  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.290739  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.290750  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.290764  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.292591  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.292603  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.292609  353369 round_trippers.go:580]     Audit-Id: f35578c0-8733-496d-a1a4-b813b72bd11f
	I1128 03:05:56.292615  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.292620  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.292626  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.292631  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.292638  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.292917  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:56.293291  353369 pod_ready.go:92] pod "etcd-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:56.293307  353369 pod_ready.go:81] duration metric: took 5.174216ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.293326  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.293382  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:05:56.293390  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.293400  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.293410  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.295132  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.295143  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.295149  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.295154  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.295159  353369 round_trippers.go:580]     Audit-Id: 3ec659dd-f653-46dc-83f9-957e33225bd6
	I1128 03:05:56.295167  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.295175  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.295187  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.295564  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"449","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1128 03:05:56.295920  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.295934  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.295942  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.295949  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.297744  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.297757  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.297763  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.297768  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.297773  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.297778  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.297785  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.297793  353369 round_trippers.go:580]     Audit-Id: 321e8608-6e5d-4d94-8372-f476779da00f
	I1128 03:05:56.298264  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:56.298629  353369 pod_ready.go:92] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:56.298647  353369 pod_ready.go:81] duration metric: took 5.313747ms waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.298656  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.298706  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:05:56.298714  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.298721  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.298727  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.300429  353369 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:05:56.300442  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.300448  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.300453  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.300458  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.300464  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.300474  353369 round_trippers.go:580]     Audit-Id: 62f4e18e-839a-432c-b61a-2a0a3676afbd
	I1128 03:05:56.300484  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.300792  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"450","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1128 03:05:56.301203  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.301220  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.301227  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.301232  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.305279  353369 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:05:56.305299  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.305310  353369 round_trippers.go:580]     Audit-Id: 3669b67e-c754-4a20-bc17-a58d1993d5c2
	I1128 03:05:56.305321  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.305329  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.305340  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.305356  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.305366  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.305614  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:56.305973  353369 pod_ready.go:92] pod "kube-controller-manager-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:56.305993  353369 pod_ready.go:81] duration metric: took 7.330913ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.306004  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.466412  353369 request.go:629] Waited for 160.3057ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:05:56.466502  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:05:56.466514  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.466526  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.466536  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.469735  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:56.469762  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.469770  353369 round_trippers.go:580]     Audit-Id: 45cc3f42-692f-4100-a040-c3405421ce48
	I1128 03:05:56.469776  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.469781  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.469786  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.469792  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.469799  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.470053  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"413","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:05:56.666668  353369 request.go:629] Waited for 196.146477ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.666728  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:56.666733  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.666740  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.666747  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.669466  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:56.669488  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.669498  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.669505  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.669512  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.669520  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.669529  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.669541  353369 round_trippers.go:580]     Audit-Id: 85f4ce51-cc54-484e-96ed-8fc5dc694c0c
	I1128 03:05:56.669735  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:56.670106  353369 pod_ready.go:92] pod "kube-proxy-bmr6b" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:56.670125  353369 pod_ready.go:81] duration metric: took 364.112091ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.670137  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:56.866592  353369 request.go:629] Waited for 196.365883ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:05:56.866661  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:05:56.866667  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:56.866679  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:56.866689  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:56.870687  353369 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:05:56.870715  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:56.870727  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:56.870735  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:56 GMT
	I1128 03:05:56.870742  353369 round_trippers.go:580]     Audit-Id: 506e139c-adf1-470f-ae85-1660c1ce4340
	I1128 03:05:56.870749  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:56.870758  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:56.870770  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:56.871337  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgxjs","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d","resourceVersion":"521","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1128 03:05:57.067157  353369 request.go:629] Waited for 195.392794ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:57.067233  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:05:57.067237  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:57.067250  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:57.067259  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:57.070119  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:57.070144  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:57.070157  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:57 GMT
	I1128 03:05:57.070166  353369 round_trippers.go:580]     Audit-Id: b1b50f89-78a2-40fc-a569-cc6fe29d2f53
	I1128 03:05:57.070173  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:57.070180  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:57.070187  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:57.070197  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:57.070206  353369 round_trippers.go:580]     Content-Length: 3605
	I1128 03:05:57.070273  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"536","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 2581 chars]
	I1128 03:05:57.070543  353369 pod_ready.go:92] pod "kube-proxy-jgxjs" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:57.070560  353369 pod_ready.go:81] duration metric: took 400.40496ms waiting for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:57.070572  353369 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:57.267005  353369 request.go:629] Waited for 196.361136ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:05:57.267098  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:05:57.267104  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:57.267117  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:57.267127  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:57.269644  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:57.269664  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:57.269670  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:57.269676  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:57.269681  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:57.269686  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:57.269691  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:57 GMT
	I1128 03:05:57.269697  353369 round_trippers.go:580]     Audit-Id: 470fb9d5-98ae-4e0c-9cc7-7d8d4ad56a98
	I1128 03:05:57.270038  353369 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"448","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1128 03:05:57.466793  353369 request.go:629] Waited for 196.38138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:57.466858  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:05:57.466863  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:57.466871  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:57.466877  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:57.469589  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:57.469613  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:57.469621  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:57.469632  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:57.469641  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:57 GMT
	I1128 03:05:57.469649  353369 round_trippers.go:580]     Audit-Id: 085ffe99-cd13-4ccf-86e1-a7a293fb800f
	I1128 03:05:57.469655  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:57.469661  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:57.470042  353369 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5898 chars]
	I1128 03:05:57.470356  353369 pod_ready.go:92] pod "kube-scheduler-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:05:57.470370  353369 pod_ready.go:81] duration metric: took 399.790873ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:05:57.470380  353369 pod_ready.go:38] duration metric: took 1.200613309s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:05:57.470398  353369 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 03:05:57.470444  353369 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:05:57.484352  353369 system_svc.go:56] duration metric: took 13.944793ms WaitForService to wait for kubelet.
	I1128 03:05:57.484384  353369 kubeadm.go:581] duration metric: took 9.24541892s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 03:05:57.484404  353369 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:05:57.666807  353369 request.go:629] Waited for 182.325757ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes
	I1128 03:05:57.666894  353369 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:05:57.666901  353369 round_trippers.go:469] Request Headers:
	I1128 03:05:57.666912  353369 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:05:57.666927  353369 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:05:57.669802  353369 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:05:57.669830  353369 round_trippers.go:577] Response Headers:
	I1128 03:05:57.669840  353369 round_trippers.go:580]     Audit-Id: d0f27b41-3efe-4164-be40-0d12755527f7
	I1128 03:05:57.669849  353369 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:05:57.669859  353369 round_trippers.go:580]     Content-Type: application/json
	I1128 03:05:57.669868  353369 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:05:57.669876  353369 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:05:57.669884  353369 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:05:57 GMT
	I1128 03:05:57.670092  353369 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"538"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"423","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9524 chars]
	I1128 03:05:57.670694  353369 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:05:57.670726  353369 node_conditions.go:123] node cpu capacity is 2
	I1128 03:05:57.670739  353369 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:05:57.670751  353369 node_conditions.go:123] node cpu capacity is 2
	I1128 03:05:57.670757  353369 node_conditions.go:105] duration metric: took 186.348122ms to run NodePressure ...
	I1128 03:05:57.670775  353369 start.go:228] waiting for startup goroutines ...
	I1128 03:05:57.670808  353369 start.go:242] writing updated cluster config ...
	I1128 03:05:57.671250  353369 ssh_runner.go:195] Run: rm -f paused
	I1128 03:05:57.720590  353369 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 03:05:57.723989  353369 out.go:177] * Done! kubectl is now configured to use "multinode-112998" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:04:11 UTC, ends at Tue 2023-11-28 03:06:04 UTC. --
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.315360083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bfaa8631-6b38-4b69-bced-1700c2cc4d63 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.316899142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a067cccd-287d-4b1d-a98e-d3c60898b1da name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.317465413Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701140764317448015,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a067cccd-287d-4b1d-a98e-d3c60898b1da name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.318081556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb9c0134-a4a0-4325-a730-bbe8439d5c43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.318233256Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb9c0134-a4a0-4325-a730-bbe8439d5c43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.318440954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311e4ad871a10a08c547f9f22aa420b4972111f7963b878860739bdf9613b3a0,PodSandboxId:61072254cbb550a65f32bc3ff27f0e32eb751c8aaa35105188b5849a3ea6e82d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701140760208666267,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f34ac99ee6d0990824b819cb893ae75e93377539251ddc49098dd954072d89,PodSandboxId:a87935dc1880987cc06c46bf92714abe6ea05f48ac48cd3bb686b0228c154926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701140702941363137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eced9a6d1dd1684dcc2e43ab8fe96ce5fb58eb2a73df7e5fbe077dd81233cfdd,PodSandboxId:02f39f1945002adba11c7bf91fb21960a7ece00706651af8bdcafd7489fcd419,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140702677682833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1f0a8fe80901b2de7ba834e734a0177fc9ed6e921280c7fd196e61fa333562,PodSandboxId:89581aaed7e11e03a5604f697faa6c2d094b771918ec915257e89611cc624d8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701140700158898524,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028a26d9be2d5d6a7e05bb89b06aee732c94aeecf3d30642e0fbd1170736f9e1,PodSandboxId:571003bbe8cdde6095ff7e6874a9b3796fc2bb4687de00202010d016e51486d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701140698055817777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad3255
685aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c12c2a575d5a4b3726652c2920bb5cec1eb4b5db2dc1f19625a83430855f19,PodSandboxId:40ab8eeb75a7243a61721e107fd2b10c35341eef251c25acbd67b560c977ed55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701140677746242675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes
.container.hash: fdf50157,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc79cc3a790537e1d12c89615ba54f93edc29318cf03be7947345123f97fcc6a,PodSandboxId:93b3be9dfa70f9cfe05c6e9187ecdd91043126f13db139e4e65ff2a315828a03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701140677654598190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84e423fed8a3554c225795c8e9d336bd5e0a19cc1d82765035789b56db036a16,PodSandboxId:c566d1d1aeb986c3600991480eeff210bea2396d68fa8d9764eac81dbbafc7f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701140677481384267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0,PodSandboxId:a5cde0236d3cf34e20d12d8cfa2a2064ae261ff3b3a0699cdbbdaa7ee41289a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701140677350038839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes
.container.hash: 461cc332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb9c0134-a4a0-4325-a730-bbe8439d5c43 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.362462416Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8fb0b391-a8e8-491a-9643-840caf1a890e name=/runtime.v1.RuntimeService/Version
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.362552221Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8fb0b391-a8e8-491a-9643-840caf1a890e name=/runtime.v1.RuntimeService/Version
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.365053647Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d21c3e35-0224-4f8d-b0f9-6ed8ae1319cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.365539830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701140764365524489,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d21c3e35-0224-4f8d-b0f9-6ed8ae1319cb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.366495887Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2bfba302-ffad-48aa-b01c-42d04ecb68fe name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.366567201Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2bfba302-ffad-48aa-b01c-42d04ecb68fe name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.366842841Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311e4ad871a10a08c547f9f22aa420b4972111f7963b878860739bdf9613b3a0,PodSandboxId:61072254cbb550a65f32bc3ff27f0e32eb751c8aaa35105188b5849a3ea6e82d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701140760208666267,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f34ac99ee6d0990824b819cb893ae75e93377539251ddc49098dd954072d89,PodSandboxId:a87935dc1880987cc06c46bf92714abe6ea05f48ac48cd3bb686b0228c154926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701140702941363137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eced9a6d1dd1684dcc2e43ab8fe96ce5fb58eb2a73df7e5fbe077dd81233cfdd,PodSandboxId:02f39f1945002adba11c7bf91fb21960a7ece00706651af8bdcafd7489fcd419,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140702677682833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1f0a8fe80901b2de7ba834e734a0177fc9ed6e921280c7fd196e61fa333562,PodSandboxId:89581aaed7e11e03a5604f697faa6c2d094b771918ec915257e89611cc624d8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701140700158898524,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028a26d9be2d5d6a7e05bb89b06aee732c94aeecf3d30642e0fbd1170736f9e1,PodSandboxId:571003bbe8cdde6095ff7e6874a9b3796fc2bb4687de00202010d016e51486d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701140698055817777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad3255
685aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c12c2a575d5a4b3726652c2920bb5cec1eb4b5db2dc1f19625a83430855f19,PodSandboxId:40ab8eeb75a7243a61721e107fd2b10c35341eef251c25acbd67b560c977ed55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701140677746242675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes
.container.hash: fdf50157,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc79cc3a790537e1d12c89615ba54f93edc29318cf03be7947345123f97fcc6a,PodSandboxId:93b3be9dfa70f9cfe05c6e9187ecdd91043126f13db139e4e65ff2a315828a03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701140677654598190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84e423fed8a3554c225795c8e9d336bd5e0a19cc1d82765035789b56db036a16,PodSandboxId:c566d1d1aeb986c3600991480eeff210bea2396d68fa8d9764eac81dbbafc7f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701140677481384267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0,PodSandboxId:a5cde0236d3cf34e20d12d8cfa2a2064ae261ff3b3a0699cdbbdaa7ee41289a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701140677350038839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes
.container.hash: 461cc332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2bfba302-ffad-48aa-b01c-42d04ecb68fe name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.398948154Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a9c69b6d-fdb0-41e2-8630-4bb85bb569e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.399329966Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:61072254cbb550a65f32bc3ff27f0e32eb751c8aaa35105188b5849a3ea6e82d,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-pmx8j,Uid:7feaf891-161d-47cb-842c-1357fb63956c,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140758835972604,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:05:58.493274836Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a87935dc1880987cc06c46bf92714abe6ea05f48ac48cd3bb686b0228c154926,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-sd64m,Uid:0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1701140702226867913,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:05:01.885028195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:02f39f1945002adba11c7bf91fb21960a7ece00706651af8bdcafd7489fcd419,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:80d85aa0-5ee8-48db-a570-fdde6138e079,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140702221803687,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T03:05:01.878516869Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:89581aaed7e11e03a5604f697faa6c2d094b771918ec915257e89611cc624d8e,Metadata:&PodSandboxMetadata{Name:kindnet-5pfcd,Uid:370f4bc7-f3dd-456e-b67a-fff569e42ac1,Namespace:kube-system,Attem
pt:0,},State:SANDBOX_READY,CreatedAt:1701140697186884498,Labels:map[string]string{app: kindnet,controller-revision-hash: 5666b6c4d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:04:56.237764697Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:571003bbe8cdde6095ff7e6874a9b3796fc2bb4687de00202010d016e51486d3,Metadata:&PodSandboxMetadata{Name:kube-proxy-bmr6b,Uid:0d9b86f2-025d-424d-a66f-ad3255685aca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140697182394138,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad3255685aca,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:04:56.222607833Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a5cde0236d3cf34e20d12d8cfa2a2064ae261ff3b3a0699cdbbdaa7ee41289a5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-multinode-112998,Uid:f38601fa395350043ca26b7c11be4397,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140676831073569,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.73:8443,kubernetes.io/config.hash: f38601fa395350043ca26b7c11be4397,kubernetes.io/config.seen: 2023-11-28T03:04:36.287804763Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c566d1d1aeb986c3600991480eeff210be
a2396d68fa8d9764eac81dbbafc7f8,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-112998,Uid:8aad7d6fb2125381c02e5fd8434005a3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140676819332513,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8aad7d6fb2125381c02e5fd8434005a3,kubernetes.io/config.seen: 2023-11-28T03:04:36.287805839Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:40ab8eeb75a7243a61721e107fd2b10c35341eef251c25acbd67b560c977ed55,Metadata:&PodSandboxMetadata{Name:etcd-multinode-112998,Uid:424bc6684b5cae600504832fd6cb287f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140676814316301,Labels:map[string]string{component: etcd,io.kubernetes.conta
iner.name: POD,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.73:2379,kubernetes.io/config.hash: 424bc6684b5cae600504832fd6cb287f,kubernetes.io/config.seen: 2023-11-28T03:04:36.287800551Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:93b3be9dfa70f9cfe05c6e9187ecdd91043126f13db139e4e65ff2a315828a03,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-112998,Uid:49372038efccb5b42d91203468562dfb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701140676803489093,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,tier: control-plane,},Annotations:map[string]string{kubernet
es.io/config.hash: 49372038efccb5b42d91203468562dfb,kubernetes.io/config.seen: 2023-11-28T03:04:36.287806807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=a9c69b6d-fdb0-41e2-8630-4bb85bb569e4 name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.400129655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92d8be31-f4eb-4759-9bd1-25061b82cb3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.400276064Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92d8be31-f4eb-4759-9bd1-25061b82cb3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.400553436Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311e4ad871a10a08c547f9f22aa420b4972111f7963b878860739bdf9613b3a0,PodSandboxId:61072254cbb550a65f32bc3ff27f0e32eb751c8aaa35105188b5849a3ea6e82d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701140760208666267,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f34ac99ee6d0990824b819cb893ae75e93377539251ddc49098dd954072d89,PodSandboxId:a87935dc1880987cc06c46bf92714abe6ea05f48ac48cd3bb686b0228c154926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701140702941363137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eced9a6d1dd1684dcc2e43ab8fe96ce5fb58eb2a73df7e5fbe077dd81233cfdd,PodSandboxId:02f39f1945002adba11c7bf91fb21960a7ece00706651af8bdcafd7489fcd419,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140702677682833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1f0a8fe80901b2de7ba834e734a0177fc9ed6e921280c7fd196e61fa333562,PodSandboxId:89581aaed7e11e03a5604f697faa6c2d094b771918ec915257e89611cc624d8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701140700158898524,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028a26d9be2d5d6a7e05bb89b06aee732c94aeecf3d30642e0fbd1170736f9e1,PodSandboxId:571003bbe8cdde6095ff7e6874a9b3796fc2bb4687de00202010d016e51486d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701140698055817777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad3255
685aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c12c2a575d5a4b3726652c2920bb5cec1eb4b5db2dc1f19625a83430855f19,PodSandboxId:40ab8eeb75a7243a61721e107fd2b10c35341eef251c25acbd67b560c977ed55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701140677746242675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes
.container.hash: fdf50157,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc79cc3a790537e1d12c89615ba54f93edc29318cf03be7947345123f97fcc6a,PodSandboxId:93b3be9dfa70f9cfe05c6e9187ecdd91043126f13db139e4e65ff2a315828a03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701140677654598190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84e423fed8a3554c225795c8e9d336bd5e0a19cc1d82765035789b56db036a16,PodSandboxId:c566d1d1aeb986c3600991480eeff210bea2396d68fa8d9764eac81dbbafc7f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701140677481384267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0,PodSandboxId:a5cde0236d3cf34e20d12d8cfa2a2064ae261ff3b3a0699cdbbdaa7ee41289a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701140677350038839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes
.container.hash: 461cc332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92d8be31-f4eb-4759-9bd1-25061b82cb3d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.412251609Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a717363f-4311-41ef-b19a-1552677edd71 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.412319327Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a717363f-4311-41ef-b19a-1552677edd71 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.413626800Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=eee60513-ebfd-4a66-98b4-7d717ef2780a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.414720471Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701140764414632247,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=eee60513-ebfd-4a66-98b4-7d717ef2780a name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.415620832Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9560c586-b284-4ad8-b000-b93b6a893007 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.415691534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9560c586-b284-4ad8-b000-b93b6a893007 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:06:04 multinode-112998 crio[717]: time="2023-11-28 03:06:04.415884180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:311e4ad871a10a08c547f9f22aa420b4972111f7963b878860739bdf9613b3a0,PodSandboxId:61072254cbb550a65f32bc3ff27f0e32eb751c8aaa35105188b5849a3ea6e82d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701140760208666267,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73f34ac99ee6d0990824b819cb893ae75e93377539251ddc49098dd954072d89,PodSandboxId:a87935dc1880987cc06c46bf92714abe6ea05f48ac48cd3bb686b0228c154926,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701140702941363137,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eced9a6d1dd1684dcc2e43ab8fe96ce5fb58eb2a73df7e5fbe077dd81233cfdd,PodSandboxId:02f39f1945002adba11c7bf91fb21960a7ece00706651af8bdcafd7489fcd419,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701140702677682833,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b1f0a8fe80901b2de7ba834e734a0177fc9ed6e921280c7fd196e61fa333562,PodSandboxId:89581aaed7e11e03a5604f697faa6c2d094b771918ec915257e89611cc624d8e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701140700158898524,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:028a26d9be2d5d6a7e05bb89b06aee732c94aeecf3d30642e0fbd1170736f9e1,PodSandboxId:571003bbe8cdde6095ff7e6874a9b3796fc2bb4687de00202010d016e51486d3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701140698055817777,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad3255
685aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f8c12c2a575d5a4b3726652c2920bb5cec1eb4b5db2dc1f19625a83430855f19,PodSandboxId:40ab8eeb75a7243a61721e107fd2b10c35341eef251c25acbd67b560c977ed55,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701140677746242675,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes
.container.hash: fdf50157,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc79cc3a790537e1d12c89615ba54f93edc29318cf03be7947345123f97fcc6a,PodSandboxId:93b3be9dfa70f9cfe05c6e9187ecdd91043126f13db139e4e65ff2a315828a03,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701140677654598190,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.h
ash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:84e423fed8a3554c225795c8e9d336bd5e0a19cc1d82765035789b56db036a16,PodSandboxId:c566d1d1aeb986c3600991480eeff210bea2396d68fa8d9764eac81dbbafc7f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701140677481384267,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{i
o.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0,PodSandboxId:a5cde0236d3cf34e20d12d8cfa2a2064ae261ff3b3a0699cdbbdaa7ee41289a5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701140677350038839,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes
.container.hash: 461cc332,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9560c586-b284-4ad8-b000-b93b6a893007 name=/runtime.v1.RuntimeService/ListContainers
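
For reference, the CRI-O debug entries above come from the crio systemd unit inside the multinode-112998 VM. A minimal sketch for pulling the same journal directly, assuming the profile is still running and the unit is named crio as in the standard minikube guest image:

  $ minikube ssh -p multinode-112998 "sudo journalctl -u crio --no-pager | tail -n 200"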
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	311e4ad871a10       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 seconds ago        Running             busybox                   0                   61072254cbb55       busybox-5bc68d56bd-pmx8j
	73f34ac99ee6d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   0                   a87935dc18809       coredns-5dd5756b68-sd64m
	eced9a6d1dd16       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   02f39f1945002       storage-provisioner
	6b1f0a8fe8090       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      About a minute ago   Running             kindnet-cni               0                   89581aaed7e11       kindnet-5pfcd
	028a26d9be2d5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                0                   571003bbe8cdd       kube-proxy-bmr6b
	f8c12c2a575d5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   40ab8eeb75a72       etcd-multinode-112998
	dc79cc3a79053       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            0                   93b3be9dfa70f       kube-scheduler-multinode-112998
	84e423fed8a35       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   0                   c566d1d1aeb98       kube-controller-manager-multinode-112998
	e770ed13f8621       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            0                   a5cde0236d3cf       kube-apiserver-multinode-112998
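
The container status table above is the node-level CRI view of the same containers listed in the CRI-O responses. A sketch for reproducing it on the node, assuming the multinode-112998 profile is still up:

  $ minikube ssh -p multinode-112998 "sudo crictl ps -a"

crictl ps -a lists running and exited containers with their image, state, attempt count and owning pod, which is the information summarized here.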
	
	* 
	* ==> coredns [73f34ac99ee6d0990824b819cb893ae75e93377539251ddc49098dd954072d89] <==
	* [INFO] 10.244.1.2:46017 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000109396s
	[INFO] 10.244.0.3:40038 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129037s
	[INFO] 10.244.0.3:35363 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002022474s
	[INFO] 10.244.0.3:45804 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000804s
	[INFO] 10.244.0.3:41407 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000073081s
	[INFO] 10.244.0.3:54301 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001408223s
	[INFO] 10.244.0.3:54845 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00006563s
	[INFO] 10.244.0.3:57750 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00006787s
	[INFO] 10.244.0.3:43185 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000060602s
	[INFO] 10.244.1.2:39444 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000158853s
	[INFO] 10.244.1.2:49073 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000123932s
	[INFO] 10.244.1.2:48131 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000282528s
	[INFO] 10.244.1.2:54708 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00012317s
	[INFO] 10.244.0.3:42406 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100729s
	[INFO] 10.244.0.3:45371 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00015043s
	[INFO] 10.244.0.3:54872 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00017659s
	[INFO] 10.244.0.3:44369 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010174s
	[INFO] 10.244.1.2:59610 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113812s
	[INFO] 10.244.1.2:57446 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000193518s
	[INFO] 10.244.1.2:41653 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000173179s
	[INFO] 10.244.1.2:58975 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000147124s
	[INFO] 10.244.0.3:45987 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000111203s
	[INFO] 10.244.0.3:36924 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000080503s
	[INFO] 10.244.0.3:33334 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000070098s
	[INFO] 10.244.0.3:46007 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000072801s
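
The CoreDNS entries above are ordinary in-cluster lookups from pods on each node (10.244.0.3 and 10.244.1.2), all answered with NOERROR. A hedged way to generate and check the same kind of lookup, assuming the profile's kubectl context is named after the profile:

  $ kubectl --context multinode-112998 run -it --rm dns-check --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

A NOERROR answer for kubernetes.default.svc.cluster.local, as logged above, indicates cluster DNS is resolving normally.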
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-112998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-112998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=multinode-112998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T03_04_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-112998
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 03:05:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 03:05:01 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 03:05:01 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 03:05:01 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 03:05:01 +0000   Tue, 28 Nov 2023 03:05:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    multinode-112998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bda6ed0d564437f8712556bb0d814ca
	  System UUID:                1bda6ed0-d564-437f-8712-556bb0d814ca
	  Boot ID:                    94f5c591-19f6-4bf1-8c83-9fd628f0a5e4
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pmx8j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 coredns-5dd5756b68-sd64m                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     67s
	  kube-system                 etcd-multinode-112998                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         80s
	  kube-system                 kindnet-5pfcd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      68s
	  kube-system                 kube-apiserver-multinode-112998             250m (12%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-controller-manager-multinode-112998    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-bmr6b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         68s
	  kube-system                 kube-scheduler-multinode-112998             100m (5%)     0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 66s                kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node multinode-112998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node multinode-112998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)  kubelet          Node multinode-112998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 80s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  80s                kubelet          Node multinode-112998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    80s                kubelet          Node multinode-112998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     80s                kubelet          Node multinode-112998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  80s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node multinode-112998 event: Registered Node multinode-112998 in Controller
	  Normal  NodeReady                63s                kubelet          Node multinode-112998 status is now: NodeReady
	
	
	Name:               multinode-112998-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-112998-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:05:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-112998-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 03:05:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 03:05:56 +0000   Tue, 28 Nov 2023 03:05:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 03:05:56 +0000   Tue, 28 Nov 2023 03:05:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 03:05:56 +0000   Tue, 28 Nov 2023 03:05:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 03:05:56 +0000   Tue, 28 Nov 2023 03:05:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    multinode-112998-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 47452ea11cdf4a348286be6a25ec050b
	  System UUID:                47452ea1-1cdf-4a34-8286-be6a25ec050b
	  Boot ID:                    bacfa77d-60b4-4117-96a9-be81f94f3280
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-cbjtg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  kube-system                 kindnet-v2g52               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17s
	  kube-system                 kube-proxy-jgxjs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13s                kube-proxy       
	  Normal  NodeHasSufficientMemory  17s (x5 over 19s)  kubelet          Node multinode-112998-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17s (x5 over 19s)  kubelet          Node multinode-112998-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17s (x5 over 19s)  kubelet          Node multinode-112998-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13s                node-controller  Node multinode-112998-m02 event: Registered Node multinode-112998-m02 in Controller
	  Normal  NodeReady                8s                 kubelet          Node multinode-112998-m02 status is now: NodeReady
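
The two node blocks above are kubectl describe output for the control-plane node and the worker: both report Ready, pod CIDRs 10.244.0.0/24 and 10.244.1.0/24 are assigned, and one busybox test pod is scheduled on each node. Assuming the context name matches the profile, the same view can be regenerated with:

  $ kubectl --context multinode-112998 describe nodes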
	
	* 
	* ==> dmesg <==
	* [Nov28 03:04] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069738] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.362882] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.408781] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150086] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.023873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.938978] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.106316] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.148871] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.107555] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.218705] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[  +8.696773] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +8.267614] systemd-fstab-generator[1264]: Ignoring "noauto" for root device
	[Nov28 03:05] kauditd_printk_skb: 18 callbacks suppressed
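
The dmesg excerpt is the VM's kernel ring buffer; the nomodeset, regulatory.db and NFSD recovery warnings are typical of the minikube Buildroot guest and generally benign. A sketch for re-reading it on the node, assuming the profile is still running:

  $ minikube ssh -p multinode-112998 "dmesg | tail -n 50"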
	
	* 
	* ==> etcd [f8c12c2a575d5a4b3726652c2920bb5cec1eb4b5db2dc1f19625a83430855f19] <==
	* {"level":"info","ts":"2023-11-28T03:04:39.191914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 switched to configuration voters=(2412776101401756344)"}
	{"level":"info","ts":"2023-11-28T03:04:39.192115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","added-peer-id":"217be714ae9a82b8","added-peer-peer-urls":["https://192.168.39.73:2380"]}
	{"level":"info","ts":"2023-11-28T03:04:39.194957Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T03:04:39.196511Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T03:04:39.196573Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T03:04:39.196654Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2023-11-28T03:04:39.19669Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2023-11-28T03:04:39.320099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T03:04:39.320264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T03:04:39.320313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgPreVoteResp from 217be714ae9a82b8 at term 1"}
	{"level":"info","ts":"2023-11-28T03:04:39.320347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T03:04:39.320375Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgVoteResp from 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2023-11-28T03:04:39.320402Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became leader at term 2"}
	{"level":"info","ts":"2023-11-28T03:04:39.320428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 217be714ae9a82b8 elected leader 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2023-11-28T03:04:39.324467Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:04:39.32475Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"217be714ae9a82b8","local-member-attributes":"{Name:multinode-112998 ClientURLs:[https://192.168.39.73:2379]}","request-path":"/0/members/217be714ae9a82b8/attributes","cluster-id":"97141299b087eff6","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T03:04:39.327257Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:04:39.327363Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:04:39.327403Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:04:39.32743Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T03:04:39.331401Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T03:04:39.331677Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T03:04:39.332863Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	{"level":"info","ts":"2023-11-28T03:04:39.333386Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T03:04:39.333425Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  03:06:04 up 2 min,  0 users,  load average: 0.59, 0.35, 0.14
	Linux multinode-112998 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [6b1f0a8fe80901b2de7ba834e734a0177fc9ed6e921280c7fd196e61fa333562] <==
	* I1128 03:05:00.913980       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I1128 03:05:00.914556       1 main.go:107] hostIP = 192.168.39.73
	podIP = 192.168.39.73
	I1128 03:05:00.915610       1 main.go:116] setting mtu 1500 for CNI 
	I1128 03:05:00.915649       1 main.go:146] kindnetd IP family: "ipv4"
	I1128 03:05:00.915751       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I1128 03:05:01.510591       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:01.510680       1 main.go:227] handling current node
	I1128 03:05:11.524420       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:11.524469       1 main.go:227] handling current node
	I1128 03:05:21.529299       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:21.529319       1 main.go:227] handling current node
	I1128 03:05:31.543572       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:31.543629       1 main.go:227] handling current node
	I1128 03:05:41.549127       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:41.549281       1 main.go:227] handling current node
	I1128 03:05:51.558632       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:05:51.558678       1 main.go:227] handling current node
	I1128 03:05:51.558690       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:05:51.558696       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
	I1128 03:05:51.559060       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.31 Flags: [] Table: 0} 
	I1128 03:06:01.565987       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:06:01.566054       1 main.go:227] handling current node
	I1128 03:06:01.566081       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:06:01.566087       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
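
kindnet reports adding a route for the second node's pod CIDR (10.244.1.0/24 via 192.168.39.31) once multinode-112998-m02 joins. Whether that route actually landed on the control-plane node can be checked with a one-liner; a sketch assuming the profile is still up:

  $ minikube ssh -p multinode-112998 "ip route show 10.244.1.0/24"

Output along the lines of "10.244.1.0/24 via 192.168.39.31" confirms the inter-node pod route is in place.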
	
	* 
	* ==> kube-apiserver [e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0] <==
	* I1128 03:04:41.148987       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 03:04:41.149821       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 03:04:41.149865       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1128 03:04:41.149957       1 aggregator.go:166] initial CRD sync complete...
	I1128 03:04:41.149991       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 03:04:41.149997       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 03:04:41.150002       1 cache.go:39] Caches are synced for autoregister controller
	I1128 03:04:41.158330       1 controller.go:624] quota admission added evaluator for: namespaces
	I1128 03:04:41.160656       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 03:04:41.214604       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 03:04:42.047437       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1128 03:04:42.051633       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1128 03:04:42.052267       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 03:04:42.671319       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 03:04:42.724006       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 03:04:42.805862       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1128 03:04:42.838698       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.73]
	I1128 03:04:42.841345       1 controller.go:624] quota admission added evaluator for: endpoints
	I1128 03:04:42.860082       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1128 03:04:43.118154       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 03:04:44.238382       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 03:04:44.256544       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1128 03:04:44.275939       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 03:04:56.187611       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1128 03:04:56.870086       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	* 
	* ==> kube-controller-manager [84e423fed8a3554c225795c8e9d336bd5e0a19cc1d82765035789b56db036a16] <==
	* I1128 03:04:57.564428       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="67.155µs"
	I1128 03:05:01.889054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="347.772µs"
	I1128 03:05:01.927740       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.89µs"
	I1128 03:05:03.579271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="177.827µs"
	I1128 03:05:03.624411       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.787562ms"
	I1128 03:05:03.626436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.089µs"
	I1128 03:05:06.258118       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1128 03:05:47.141875       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-112998-m02\" does not exist"
	I1128 03:05:47.152896       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-112998-m02" podCIDRs=["10.244.1.0/24"]
	I1128 03:05:47.171773       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-v2g52"
	I1128 03:05:47.171825       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-jgxjs"
	I1128 03:05:51.265294       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-112998-m02"
	I1128 03:05:51.265426       1 event.go:307] "Event occurred" object="multinode-112998-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-112998-m02 event: Registered Node multinode-112998-m02 in Controller"
	I1128 03:05:56.128155       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m02"
	I1128 03:05:58.452003       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I1128 03:05:58.472138       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-cbjtg"
	I1128 03:05:58.479617       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-pmx8j"
	I1128 03:05:58.501817       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="49.50719ms"
	I1128 03:05:58.545011       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="42.922589ms"
	I1128 03:05:58.545282       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="119.246µs"
	I1128 03:05:58.545376       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="32.078µs"
	I1128 03:06:00.790780       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="5.125951ms"
	I1128 03:06:00.792135       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="97.05µs"
	I1128 03:06:00.806832       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="6.229292ms"
	I1128 03:06:00.807244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="94.179µs"
	
	* 
	* ==> kube-proxy [028a26d9be2d5d6a7e05bb89b06aee732c94aeecf3d30642e0fbd1170736f9e1] <==
	* I1128 03:04:58.243086       1 server_others.go:69] "Using iptables proxy"
	I1128 03:04:58.259527       1 node.go:141] Successfully retrieved node IP: 192.168.39.73
	I1128 03:04:58.314542       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 03:04:58.315239       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 03:04:58.319826       1 server_others.go:152] "Using iptables Proxier"
	I1128 03:04:58.319892       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 03:04:58.320053       1 server.go:846] "Version info" version="v1.28.4"
	I1128 03:04:58.320092       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:04:58.321732       1 config.go:188] "Starting service config controller"
	I1128 03:04:58.324538       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 03:04:58.323287       1 config.go:315] "Starting node config controller"
	I1128 03:04:58.324791       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 03:04:58.323373       1 config.go:97] "Starting endpoint slice config controller"
	I1128 03:04:58.324987       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 03:04:58.425331       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 03:04:58.425380       1 shared_informer.go:318] Caches are synced for service config
	I1128 03:04:58.425513       1 shared_informer.go:318] Caches are synced for node config
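
kube-proxy is running in iptables mode, so Service traffic on this node is translated by the KUBE-SERVICES chain in the nat table. A sketch for inspecting it, assuming the profile is still up:

  $ minikube ssh -p multinode-112998 "sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20"

Entries for the kubernetes and kube-dns ClusterIPs (10.96.0.1 and 10.96.0.10, as allocated in the kube-apiserver log above) indicate the rules were programmed after the proxy caches synced.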
	
	* 
	* ==> kube-scheduler [dc79cc3a790537e1d12c89615ba54f93edc29318cf03be7947345123f97fcc6a] <==
	* W1128 03:04:41.193809       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 03:04:41.193817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1128 03:04:41.193854       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 03:04:41.193862       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 03:04:41.193947       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 03:04:41.193959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 03:04:42.018598       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 03:04:42.018711       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 03:04:42.040119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 03:04:42.040273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 03:04:42.120090       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 03:04:42.120268       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 03:04:42.181887       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 03:04:42.181972       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 03:04:42.198662       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 03:04:42.198741       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 03:04:42.273053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 03:04:42.273157       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 03:04:42.302101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 03:04:42.302158       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 03:04:42.367875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 03:04:42.367934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1128 03:04:42.417480       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 03:04:42.417545       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I1128 03:04:44.968789       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:04:11 UTC, ends at Tue 2023-11-28 03:06:05 UTC. --
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: I1128 03:04:56.264883    1271 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: I1128 03:04:56.265972    1271 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.361471    1271 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.361558    1271 projected.go:198] Error preparing data for projected volume kube-api-access-tbrzn for pod kube-system/kube-proxy-bmr6b: configmap "kube-root-ca.crt" not found
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.361729    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0d9b86f2-025d-424d-a66f-ad3255685aca-kube-api-access-tbrzn podName:0d9b86f2-025d-424d-a66f-ad3255685aca nodeName:}" failed. No retries permitted until 2023-11-28 03:04:56.861642959 +0000 UTC m=+12.638294092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tbrzn" (UniqueName: "kubernetes.io/projected/0d9b86f2-025d-424d-a66f-ad3255685aca-kube-api-access-tbrzn") pod "kube-proxy-bmr6b" (UID: "0d9b86f2-025d-424d-a66f-ad3255685aca") : configmap "kube-root-ca.crt" not found
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.363398    1271 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.363417    1271 projected.go:198] Error preparing data for projected volume kube-api-access-srss4 for pod kube-system/kindnet-5pfcd: configmap "kube-root-ca.crt" not found
	Nov 28 03:04:56 multinode-112998 kubelet[1271]: E1128 03:04:56.363451    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/370f4bc7-f3dd-456e-b67a-fff569e42ac1-kube-api-access-srss4 podName:370f4bc7-f3dd-456e-b67a-fff569e42ac1 nodeName:}" failed. No retries permitted until 2023-11-28 03:04:56.863439129 +0000 UTC m=+12.640090268 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-srss4" (UniqueName: "kubernetes.io/projected/370f4bc7-f3dd-456e-b67a-fff569e42ac1-kube-api-access-srss4") pod "kindnet-5pfcd" (UID: "370f4bc7-f3dd-456e-b67a-fff569e42ac1") : configmap "kube-root-ca.crt" not found
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.565629    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bmr6b" podStartSLOduration=5.565574968 podCreationTimestamp="2023-11-28 03:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 03:04:58.551872031 +0000 UTC m=+14.328523179" watchObservedRunningTime="2023-11-28 03:05:01.565574968 +0000 UTC m=+17.342226115"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.565792    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-5pfcd" podStartSLOduration=5.565772436 podCreationTimestamp="2023-11-28 03:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 03:05:01.565550371 +0000 UTC m=+17.342201518" watchObservedRunningTime="2023-11-28 03:05:01.565772436 +0000 UTC m=+17.342423584"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.838665    1271 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.878746    1271 topology_manager.go:215] "Topology Admit Handler" podUID="80d85aa0-5ee8-48db-a570-fdde6138e079" podNamespace="kube-system" podName="storage-provisioner"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.885098    1271 topology_manager.go:215] "Topology Admit Handler" podUID="0d5cae9f-6647-42f9-a8e7-1f14dc9fa422" podNamespace="kube-system" podName="coredns-5dd5756b68-sd64m"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.893245    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/80d85aa0-5ee8-48db-a570-fdde6138e079-tmp\") pod \"storage-provisioner\" (UID: \"80d85aa0-5ee8-48db-a570-fdde6138e079\") " pod="kube-system/storage-provisioner"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.893278    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d5cae9f-6647-42f9-a8e7-1f14dc9fa422-config-volume\") pod \"coredns-5dd5756b68-sd64m\" (UID: \"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422\") " pod="kube-system/coredns-5dd5756b68-sd64m"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.893300    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28qtr\" (UniqueName: \"kubernetes.io/projected/0d5cae9f-6647-42f9-a8e7-1f14dc9fa422-kube-api-access-28qtr\") pod \"coredns-5dd5756b68-sd64m\" (UID: \"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422\") " pod="kube-system/coredns-5dd5756b68-sd64m"
	Nov 28 03:05:01 multinode-112998 kubelet[1271]: I1128 03:05:01.893321    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277cn\" (UniqueName: \"kubernetes.io/projected/80d85aa0-5ee8-48db-a570-fdde6138e079-kube-api-access-277cn\") pod \"storage-provisioner\" (UID: \"80d85aa0-5ee8-48db-a570-fdde6138e079\") " pod="kube-system/storage-provisioner"
	Nov 28 03:05:03 multinode-112998 kubelet[1271]: I1128 03:05:03.578751    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sd64m" podStartSLOduration=6.578711878 podCreationTimestamp="2023-11-28 03:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 03:05:03.576074835 +0000 UTC m=+19.352725983" watchObservedRunningTime="2023-11-28 03:05:03.578711878 +0000 UTC m=+19.355363025"
	Nov 28 03:05:03 multinode-112998 kubelet[1271]: I1128 03:05:03.611480    1271 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.611444627 podCreationTimestamp="2023-11-28 03:04:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-11-28 03:05:03.596090717 +0000 UTC m=+19.372741865" watchObservedRunningTime="2023-11-28 03:05:03.611444627 +0000 UTC m=+19.388095774"
	Nov 28 03:05:44 multinode-112998 kubelet[1271]: E1128 03:05:44.450995    1271 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 03:05:44 multinode-112998 kubelet[1271]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 03:05:44 multinode-112998 kubelet[1271]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 03:05:44 multinode-112998 kubelet[1271]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 03:05:58 multinode-112998 kubelet[1271]: I1128 03:05:58.493416    1271 topology_manager.go:215] "Topology Admit Handler" podUID="7feaf891-161d-47cb-842c-1357fb63956c" podNamespace="default" podName="busybox-5bc68d56bd-pmx8j"
	Nov 28 03:05:58 multinode-112998 kubelet[1271]: I1128 03:05:58.543563    1271 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5q5c\" (UniqueName: \"kubernetes.io/projected/7feaf891-161d-47cb-842c-1357fb63956c-kube-api-access-p5q5c\") pod \"busybox-5bc68d56bd-pmx8j\" (UID: \"7feaf891-161d-47cb-842c-1357fb63956c\") " pod="default/busybox-5bc68d56bd-pmx8j"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-112998 -n multinode-112998
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-112998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.37s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (701.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-112998
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-112998
E1128 03:08:34.222426  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:08:43.673705  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-112998: exit status 82 (2m0.80867654s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-112998"  ...
	* Stopping node "multinode-112998"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
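The GUEST_STOP_TIMEOUT advice above asks for two artifacts. A minimal sketch of collecting them by hand, assuming the same local binary path and profile name as this run (output filenames are illustrative):

	# write the full minikube log bundle to a file, as the message requests
	out/minikube-linux-amd64 -p multinode-112998 logs --file=logs.txt
	# the failed stop also writes its own log, referenced in the message above
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log .
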
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-112998" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-112998 --wait=true -v=8 --alsologtostderr
E1128 03:10:06.721904  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:11:23.483939  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:13:34.222649  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:13:43.674043  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:14:57.270050  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:16:23.484264  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:17:46.530144  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:18:34.222566  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:18:43.674134  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-112998 --wait=true -v=8 --alsologtostderr: (9m37.247164827s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-112998
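For reference, the stop/restart sequence this test exercises can be replayed by hand. A minimal sketch using the binary path, profile name, and flags from the run above (the stop step is where this run hit exit status 82; timings will differ):

	out/minikube-linux-amd64 node list -p multinode-112998
	out/minikube-linux-amd64 stop -p multinode-112998
	out/minikube-linux-amd64 start -p multinode-112998 --wait=true -v=8 --alsologtostderr
	out/minikube-linux-amd64 node list -p multinode-112998
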
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-112998 -n multinode-112998
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-112998 logs -n 25: (1.641022475s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | multinode-112998-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2985400018/001/cp-test_multinode-112998-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | multinode-112998-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | multinode-112998:/home/docker/cp-test_multinode-112998-m02_multinode-112998.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | multinode-112998-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n multinode-112998 sudo cat                                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | /home/docker/cp-test_multinode-112998-m02_multinode-112998.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:06 UTC |
	|         | multinode-112998-m03:/home/docker/cp-test_multinode-112998-m02_multinode-112998-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:06 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n multinode-112998-m03 sudo cat                                   | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | /home/docker/cp-test_multinode-112998-m02_multinode-112998-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp testdata/cp-test.txt                                                | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2985400018/001/cp-test_multinode-112998-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998:/home/docker/cp-test_multinode-112998-m03_multinode-112998.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n multinode-112998 sudo cat                                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | /home/docker/cp-test_multinode-112998-m03_multinode-112998.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt                       | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m02:/home/docker/cp-test_multinode-112998-m03_multinode-112998-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n                                                                 | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | multinode-112998-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-112998 ssh -n multinode-112998-m02 sudo cat                                   | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | /home/docker/cp-test_multinode-112998-m03_multinode-112998-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-112998 node stop m03                                                          | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	| node    | multinode-112998 node start                                                             | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC | 28 Nov 23 03:07 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-112998                                                                | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC |                     |
	| stop    | -p multinode-112998                                                                     | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:07 UTC |                     |
	| start   | -p multinode-112998                                                                     | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:09 UTC | 28 Nov 23 03:19 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-112998                                                                | multinode-112998 | jenkins | v1.32.0 | 28 Nov 23 03:19 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 03:09:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 03:09:36.429451  356731 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:09:36.429741  356731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:09:36.429751  356731 out.go:309] Setting ErrFile to fd 2...
	I1128 03:09:36.429756  356731 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:09:36.429999  356731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:09:36.430638  356731 out.go:303] Setting JSON to false
	I1128 03:09:36.431654  356731 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6727,"bootTime":1701134250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 03:09:36.431716  356731 start.go:138] virtualization: kvm guest
	I1128 03:09:36.434609  356731 out.go:177] * [multinode-112998] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 03:09:36.436442  356731 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 03:09:36.438017  356731 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 03:09:36.436446  356731 notify.go:220] Checking for updates...
	I1128 03:09:36.440855  356731 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:09:36.442311  356731 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:09:36.443550  356731 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 03:09:36.444735  356731 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 03:09:36.446389  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:09:36.446508  356731 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 03:09:36.446922  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:09:36.446970  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:09:36.461822  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43129
	I1128 03:09:36.462374  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:09:36.462992  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:09:36.463019  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:09:36.463348  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:09:36.463503  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:09:36.499005  356731 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 03:09:36.500543  356731 start.go:298] selected driver: kvm2
	I1128 03:09:36.500567  356731 start.go:902] validating driver "kvm2" against &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false
ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:
DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:09:36.500714  356731 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 03:09:36.501074  356731 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:09:36.501143  356731 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 03:09:36.516153  356731 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 03:09:36.516790  356731 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 03:09:36.516870  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:09:36.516898  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:09:36.516914  356731 start_flags.go:323] config:
	{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-prov
isioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: Socket
VMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:09:36.517185  356731 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:09:36.519242  356731 out.go:177] * Starting control plane node multinode-112998 in cluster multinode-112998
	I1128 03:09:36.520478  356731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:09:36.520517  356731 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 03:09:36.520531  356731 cache.go:56] Caching tarball of preloaded images
	I1128 03:09:36.520618  356731 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 03:09:36.520633  356731 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 03:09:36.520787  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:09:36.521026  356731 start.go:365] acquiring machines lock for multinode-112998: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:09:36.521070  356731 start.go:369] acquired machines lock for "multinode-112998" in 25.587µs
	I1128 03:09:36.521081  356731 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:09:36.521087  356731 fix.go:54] fixHost starting: 
	I1128 03:09:36.521348  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:09:36.521381  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:09:36.535340  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39309
	I1128 03:09:36.535834  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:09:36.536352  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:09:36.536386  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:09:36.536738  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:09:36.536962  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:09:36.537117  356731 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:09:36.538760  356731 fix.go:102] recreateIfNeeded on multinode-112998: state=Running err=<nil>
	W1128 03:09:36.538776  356731 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:09:36.540709  356731 out.go:177] * Updating the running kvm2 "multinode-112998" VM ...
	I1128 03:09:36.541929  356731 machine.go:88] provisioning docker machine ...
	I1128 03:09:36.541951  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:09:36.542155  356731 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:09:36.542338  356731 buildroot.go:166] provisioning hostname "multinode-112998"
	I1128 03:09:36.542357  356731 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:09:36.542523  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:09:36.545245  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:09:36.545722  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:09:36.545757  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:09:36.545878  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:09:36.546046  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:09:36.546165  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:09:36.546257  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:09:36.546435  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:09:36.546836  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:09:36.546852  356731 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998 && echo "multinode-112998" | sudo tee /etc/hostname
	I1128 03:09:55.101235  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:01.181222  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:04.253167  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:10.333203  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:13.405188  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:19.485214  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:22.557207  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:28.637164  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:31.709220  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:37.789170  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:40.861159  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:46.941149  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:50.013240  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:56.093165  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:10:59.165214  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:05.245193  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:08.317153  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:14.397139  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:17.469150  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:23.549179  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:26.621208  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:32.701199  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:35.773153  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:41.853181  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:44.925225  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:51.005193  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:11:54.077280  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:00.157207  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:03.229250  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:09.309195  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:12.381245  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:18.461180  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:21.533243  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:27.613173  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:30.685135  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:36.765210  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:39.837176  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:45.917182  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:48.989247  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:55.069238  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:12:58.141236  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:04.221220  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:07.293258  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:13.373216  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:16.445122  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:22.525232  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:25.597188  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:31.677232  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:34.749156  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:40.829401  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:43.901286  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:49.981166  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:53.053199  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:13:59.133177  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:02.205207  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:08.285201  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:11.357116  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:17.437187  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:20.509261  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:26.589225  356731 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.73:22: connect: no route to host
	I1128 03:14:29.591938  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:14:29.591995  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:29.594123  356731 machine.go:91] provisioned docker machine in 4m53.052172476s
	I1128 03:14:29.594203  356731 fix.go:56] fixHost completed within 4m53.073115934s
	I1128 03:14:29.594214  356731 start.go:83] releasing machines lock for "multinode-112998", held for 4m53.073138397s
	W1128 03:14:29.594232  356731 start.go:691] error starting host: provision: host is not running
	W1128 03:14:29.594332  356731 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 03:14:29.594346  356731 start.go:706] Will try again in 5 seconds ...
	I1128 03:14:34.597492  356731 start.go:365] acquiring machines lock for multinode-112998: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:14:34.597614  356731 start.go:369] acquired machines lock for "multinode-112998" in 75.044µs
	I1128 03:14:34.597649  356731 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:14:34.597658  356731 fix.go:54] fixHost starting: 
	I1128 03:14:34.598018  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:14:34.598045  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:14:34.613404  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41465
	I1128 03:14:34.613957  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:14:34.614623  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:14:34.614718  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:14:34.615068  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:14:34.615291  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:34.615473  356731 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:14:34.617446  356731 fix.go:102] recreateIfNeeded on multinode-112998: state=Stopped err=<nil>
	I1128 03:14:34.617472  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	W1128 03:14:34.617643  356731 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:14:34.621352  356731 out.go:177] * Restarting existing kvm2 VM for "multinode-112998" ...
	I1128 03:14:34.622943  356731 main.go:141] libmachine: (multinode-112998) Calling .Start
	I1128 03:14:34.623109  356731 main.go:141] libmachine: (multinode-112998) Ensuring networks are active...
	I1128 03:14:34.623931  356731 main.go:141] libmachine: (multinode-112998) Ensuring network default is active
	I1128 03:14:34.624312  356731 main.go:141] libmachine: (multinode-112998) Ensuring network mk-multinode-112998 is active
	I1128 03:14:34.624710  356731 main.go:141] libmachine: (multinode-112998) Getting domain xml...
	I1128 03:14:34.625417  356731 main.go:141] libmachine: (multinode-112998) Creating domain...
	I1128 03:14:35.850379  356731 main.go:141] libmachine: (multinode-112998) Waiting to get IP...
	I1128 03:14:35.851213  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:35.851685  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:35.851746  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:35.851665  357561 retry.go:31] will retry after 302.820101ms: waiting for machine to come up
	I1128 03:14:36.156424  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:36.156958  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:36.156988  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:36.156897  357561 retry.go:31] will retry after 324.97007ms: waiting for machine to come up
	I1128 03:14:36.483518  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:36.484010  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:36.484038  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:36.483959  357561 retry.go:31] will retry after 463.243855ms: waiting for machine to come up
	I1128 03:14:36.948547  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:36.949029  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:36.949059  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:36.948969  357561 retry.go:31] will retry after 581.856542ms: waiting for machine to come up
	I1128 03:14:37.532438  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:37.532854  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:37.532896  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:37.532800  357561 retry.go:31] will retry after 696.411262ms: waiting for machine to come up
	I1128 03:14:38.230591  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:38.231067  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:38.231103  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:38.231005  357561 retry.go:31] will retry after 641.496555ms: waiting for machine to come up
	I1128 03:14:38.873700  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:38.874153  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:38.874180  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:38.874102  357561 retry.go:31] will retry after 868.073398ms: waiting for machine to come up
	I1128 03:14:39.743461  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:39.743828  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:39.743851  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:39.743782  357561 retry.go:31] will retry after 1.127073942s: waiting for machine to come up
	I1128 03:14:40.872507  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:40.872940  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:40.872974  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:40.872876  357561 retry.go:31] will retry after 1.260530229s: waiting for machine to come up
	I1128 03:14:42.135418  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:42.135793  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:42.135826  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:42.135757  357561 retry.go:31] will retry after 1.575184284s: waiting for machine to come up
	I1128 03:14:43.713724  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:43.714272  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:43.714302  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:43.714205  357561 retry.go:31] will retry after 2.600821221s: waiting for machine to come up
	I1128 03:14:46.317996  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:46.318378  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:46.318407  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:46.318355  357561 retry.go:31] will retry after 3.105127148s: waiting for machine to come up
	I1128 03:14:49.427620  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:49.427915  356731 main.go:141] libmachine: (multinode-112998) DBG | unable to find current IP address of domain multinode-112998 in network mk-multinode-112998
	I1128 03:14:49.427937  356731 main.go:141] libmachine: (multinode-112998) DBG | I1128 03:14:49.427889  357561 retry.go:31] will retry after 3.15140428s: waiting for machine to come up
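	The retries above are the driver polling libvirt for a DHCP lease until the restarted domain reports an address. A rough shell equivalent, using the domain name from the log (the backoff values below are illustrative, not what minikube uses):
	    # Poll libvirt for a DHCP lease until the domain reports an IPv4 address.
	    # Domain name comes from the log; backoff values are illustrative.
	    domain=multinode-112998
	    delay=1
	    until virsh --connect qemu:///system domifaddr "$domain" --source lease | grep -q ipv4; do
	        echo "no lease yet for $domain, retrying in ${delay}s"
	        sleep "$delay"
	        delay=$(( delay * 2 > 8 ? 8 : delay * 2 ))   # cap the backoff
	    done
	    virsh --connect qemu:///system domifaddr "$domain" --source lease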
	I1128 03:14:52.581918  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.582469  356731 main.go:141] libmachine: (multinode-112998) Found IP for machine: 192.168.39.73
	I1128 03:14:52.582504  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has current primary IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.582515  356731 main.go:141] libmachine: (multinode-112998) Reserving static IP address...
	I1128 03:14:52.582903  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "multinode-112998", mac: "52:54:00:78:69:e6", ip: "192.168.39.73"} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.582931  356731 main.go:141] libmachine: (multinode-112998) DBG | skip adding static IP to network mk-multinode-112998 - found existing host DHCP lease matching {name: "multinode-112998", mac: "52:54:00:78:69:e6", ip: "192.168.39.73"}
	I1128 03:14:52.582941  356731 main.go:141] libmachine: (multinode-112998) Reserved static IP address: 192.168.39.73
	I1128 03:14:52.582954  356731 main.go:141] libmachine: (multinode-112998) Waiting for SSH to be available...
	I1128 03:14:52.582968  356731 main.go:141] libmachine: (multinode-112998) DBG | Getting to WaitForSSH function...
	I1128 03:14:52.585316  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.585684  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.585714  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.585849  356731 main.go:141] libmachine: (multinode-112998) DBG | Using SSH client type: external
	I1128 03:14:52.585882  356731 main.go:141] libmachine: (multinode-112998) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa (-rw-------)
	I1128 03:14:52.585917  356731 main.go:141] libmachine: (multinode-112998) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.73 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 03:14:52.585934  356731 main.go:141] libmachine: (multinode-112998) DBG | About to run SSH command:
	I1128 03:14:52.585952  356731 main.go:141] libmachine: (multinode-112998) DBG | exit 0
	I1128 03:14:52.673381  356731 main.go:141] libmachine: (multinode-112998) DBG | SSH cmd err, output: <nil>: 
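	The WaitForSSH step shown above simply runs "exit 0" through an external ssh client with the options logged at the "Using SSH client type: external" line. The same liveness probe, reusing the key path and address from the log, would look like this (a hand-run sketch, not minikube code):
	    # Probe SSH reachability the same way the log shows: run "exit 0" non-interactively.
	    ssh -o ConnectTimeout=10 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o PasswordAuthentication=no -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa \
	        docker@192.168.39.73 exit 0 && echo "ssh is up"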
	I1128 03:14:52.673792  356731 main.go:141] libmachine: (multinode-112998) Calling .GetConfigRaw
	I1128 03:14:52.674477  356731 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:14:52.677275  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.677853  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.677906  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.678278  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:14:52.678561  356731 machine.go:88] provisioning docker machine ...
	I1128 03:14:52.678587  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:52.678829  356731 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:14:52.679026  356731 buildroot.go:166] provisioning hostname "multinode-112998"
	I1128 03:14:52.679054  356731 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:14:52.679287  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:52.681973  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.682353  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.682383  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.682582  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:52.682808  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:52.682963  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:52.683091  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:52.683269  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:14:52.683612  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:14:52.683626  356731 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998 && echo "multinode-112998" | sudo tee /etc/hostname
	I1128 03:14:52.813681  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-112998
	
	I1128 03:14:52.813721  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:52.816276  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.816655  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.816687  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.816932  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:52.817175  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:52.817354  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:52.817539  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:52.817718  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:14:52.818059  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:14:52.818087  356731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-112998' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-112998/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-112998' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:14:52.937388  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
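	The inline script above only rewrites (or appends) the 127.0.1.1 line so the new hostname resolves locally. A quick way to confirm the change took effect on the guest (purely a verification sketch; hostnamectl may not exist in the Buildroot image, hence the fallback):
	    # Check that the hostname and its /etc/hosts entry match.
	    hostnamectl --static 2>/dev/null || hostname
	    grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts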
	I1128 03:14:52.937428  356731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:14:52.937451  356731 buildroot.go:174] setting up certificates
	I1128 03:14:52.937463  356731 provision.go:83] configureAuth start
	I1128 03:14:52.937477  356731 main.go:141] libmachine: (multinode-112998) Calling .GetMachineName
	I1128 03:14:52.937771  356731 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:14:52.940488  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.940873  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.940910  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.941091  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:52.943338  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.943651  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:52.943686  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:52.943773  356731 provision.go:138] copyHostCerts
	I1128 03:14:52.943803  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:14:52.943832  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:14:52.943841  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:14:52.943903  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:14:52.943995  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:14:52.944012  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:14:52.944018  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:14:52.944043  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:14:52.944085  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:14:52.944102  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:14:52.944106  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:14:52.944125  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:14:52.944169  356731 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.multinode-112998 san=[192.168.39.73 192.168.39.73 localhost 127.0.0.1 minikube multinode-112998]
	I1128 03:14:53.030536  356731 provision.go:172] copyRemoteCerts
	I1128 03:14:53.030615  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:14:53.030642  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.033636  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.033987  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.034016  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.034188  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.034340  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.034531  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.034632  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:14:53.118044  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 03:14:53.118120  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:14:53.141396  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 03:14:53.141464  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1128 03:14:53.165053  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 03:14:53.165145  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 03:14:53.188924  356731 provision.go:86] duration metric: configureAuth took 251.44578ms
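	configureAuth pushed the Docker-machine style TLS material (CA, server cert, server key) to /etc/docker on the guest. The server certificate generated on the host can be sanity-checked with openssl before or after the copy; the path below is the one from the log, and openssl on the Jenkins host is an assumption:
	    # Inspect the freshly generated server certificate on the host side.
	    openssl x509 -in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem \
	        -noout -subject -dates
	    # The SANs should include the VM IP from the log (192.168.39.73) plus localhost/minikube.
	    openssl x509 -in /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem \
	        -noout -text | grep -A1 'Subject Alternative Name'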
	I1128 03:14:53.188953  356731 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:14:53.189187  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:14:53.189263  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.191895  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.192289  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.192326  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.192434  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.192629  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.192839  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.193011  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.193189  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:14:53.193516  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:14:53.193531  356731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:14:53.504307  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:14:53.504335  356731 machine.go:91] provisioned docker machine in 825.758979ms
	I1128 03:14:53.504346  356731 start.go:300] post-start starting for "multinode-112998" (driver="kvm2")
	I1128 03:14:53.504357  356731 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:14:53.504374  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:53.504713  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:14:53.504748  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.507473  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.507837  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.507869  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.508004  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.508219  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.508384  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.508508  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:14:53.594648  356731 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:14:53.598521  356731 command_runner.go:130] > NAME=Buildroot
	I1128 03:14:53.598541  356731 command_runner.go:130] > VERSION=2021.02.12-1-g21ec34a-dirty
	I1128 03:14:53.598545  356731 command_runner.go:130] > ID=buildroot
	I1128 03:14:53.598552  356731 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 03:14:53.598561  356731 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 03:14:53.598657  356731 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 03:14:53.598687  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:14:53.598764  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:14:53.598861  356731 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:14:53.598877  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 03:14:53.598991  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:14:53.608109  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:14:53.630990  356731 start.go:303] post-start completed in 126.627685ms
	I1128 03:14:53.631017  356731 fix.go:56] fixHost completed within 19.033359292s
	I1128 03:14:53.631040  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.633497  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.634072  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.634134  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.634257  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.634483  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.634614  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.634753  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.634908  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:14:53.635358  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.73 22 <nil> <nil>}
	I1128 03:14:53.635376  356731 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 03:14:53.749856  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701141293.696036091
	
	I1128 03:14:53.749883  356731 fix.go:206] guest clock: 1701141293.696036091
	I1128 03:14:53.749894  356731 fix.go:219] Guest: 2023-11-28 03:14:53.696036091 +0000 UTC Remote: 2023-11-28 03:14:53.631020788 +0000 UTC m=+317.255528567 (delta=65.015303ms)
	I1128 03:14:53.749919  356731 fix.go:190] guest clock delta is within tolerance: 65.015303ms
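	The "date +%!s(MISSING).%!N(MISSING)" line above is minikube's logger mangling its own format string; the command actually sent to the guest is "date +%s.%N", whose output (1701141293.696036091) is compared with the host clock to produce the delta shown. A hand-rolled version of the same drift check (key path and address taken from the log):
	    # Compare guest and host clocks and print the absolute drift in seconds.
	    guest=$(ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa \
	        docker@192.168.39.73 'date +%s.%N')
	    host=$(date +%s.%N)
	    echo "$guest $host" | awk '{ d = $2 - $1; if (d < 0) d = -d; printf "drift: %.3fs\n", d }'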
	I1128 03:14:53.749925  356731 start.go:83] releasing machines lock for "multinode-112998", held for 19.152296246s
	I1128 03:14:53.749947  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:53.750232  356731 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:14:53.752764  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.753130  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.753162  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.753305  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:53.753997  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:53.754180  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:14:53.754271  356731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:14:53.754320  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.754429  356731 ssh_runner.go:195] Run: cat /version.json
	I1128 03:14:53.754474  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:14:53.756845  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.757236  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.757281  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.757307  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.757409  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.757609  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.757672  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:53.757699  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:53.757793  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.757877  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:14:53.757963  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:14:53.758022  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:14:53.758124  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:14:53.758246  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:14:53.837486  356731 command_runner.go:130] > {"iso_version": "v1.32.1-1700142131-17634", "kicbase_version": "v0.0.42-1699485386-17565", "minikube_version": "v1.32.0", "commit": "6532cab52e164d1138ecb8469e77a57a00b45825"}
	I1128 03:14:53.838383  356731 ssh_runner.go:195] Run: systemctl --version
	I1128 03:14:53.862217  356731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 03:14:53.862524  356731 command_runner.go:130] > systemd 247 (247)
	I1128 03:14:53.862558  356731 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I1128 03:14:53.862634  356731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:14:54.003439  356731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 03:14:54.009401  356731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 03:14:54.009802  356731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:14:54.009891  356731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:14:54.024900  356731 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I1128 03:14:54.024979  356731 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 03:14:54.024995  356731 start.go:472] detecting cgroup driver to use...
	I1128 03:14:54.025093  356731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:14:54.038421  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:14:54.050480  356731 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:14:54.050553  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:14:54.064347  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:14:54.076867  356731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 03:14:54.090476  356731 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I1128 03:14:54.180603  356731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:14:54.300462  356731 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I1128 03:14:54.300505  356731 docker.go:219] disabling docker service ...
	I1128 03:14:54.300576  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:14:54.314414  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:14:54.326477  356731 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I1128 03:14:54.326580  356731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:14:54.340782  356731 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I1128 03:14:54.426570  356731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:14:54.525057  356731 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I1128 03:14:54.525110  356731 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I1128 03:14:54.525188  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:14:54.539349  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:14:54.556334  356731 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 03:14:54.556423  356731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 03:14:54.556481  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:14:54.566607  356731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 03:14:54.566682  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:14:54.577041  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:14:54.587176  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:14:54.597183  356731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 03:14:54.607466  356731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 03:14:54.616236  356731 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:14:54.616333  356731 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 03:14:54.616390  356731 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 03:14:54.630576  356731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 03:14:54.639618  356731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 03:14:54.745291  356731 ssh_runner.go:195] Run: sudo systemctl restart crio
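	Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.9, switch CRI-O to the cgroupfs cgroup manager with a per-pod conmon cgroup, and the modprobe/sysctl steps enable bridge netfilter and IPv4 forwarding before the restart. One way to confirm the result on the guest afterwards (the drop-in path is from the log; the grep patterns are illustrative):
	    # Confirm the CRI-O drop-in and kernel settings after the restart.
	    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward
	    systemctl is-active crio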
	I1128 03:14:54.914892  356731 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 03:14:54.914981  356731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 03:14:54.919965  356731 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 03:14:54.919993  356731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 03:14:54.919999  356731 command_runner.go:130] > Device: 16h/22d	Inode: 774         Links: 1
	I1128 03:14:54.920006  356731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:14:54.920011  356731 command_runner.go:130] > Access: 2023-11-28 03:14:54.846335571 +0000
	I1128 03:14:54.920023  356731 command_runner.go:130] > Modify: 2023-11-28 03:14:54.846335571 +0000
	I1128 03:14:54.920029  356731 command_runner.go:130] > Change: 2023-11-28 03:14:54.846335571 +0000
	I1128 03:14:54.920033  356731 command_runner.go:130] >  Birth: -
	I1128 03:14:54.920265  356731 start.go:540] Will wait 60s for crictl version
	I1128 03:14:54.920328  356731 ssh_runner.go:195] Run: which crictl
	I1128 03:14:54.923766  356731 command_runner.go:130] > /usr/bin/crictl
	I1128 03:14:54.923916  356731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 03:14:54.965619  356731 command_runner.go:130] > Version:  0.1.0
	I1128 03:14:54.965723  356731 command_runner.go:130] > RuntimeName:  cri-o
	I1128 03:14:54.966030  356731 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 03:14:54.966332  356731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 03:14:54.967982  356731 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 03:14:54.968074  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:14:55.013679  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:14:55.013712  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:14:55.013722  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:14:55.013727  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:14:55.013735  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:14:55.013743  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:14:55.013755  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:14:55.013767  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:14:55.013775  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:14:55.013785  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:14:55.013797  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:14:55.013804  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:14:55.015134  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:14:55.057092  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:14:55.057123  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:14:55.057133  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:14:55.057140  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:14:55.057150  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:14:55.057157  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:14:55.057164  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:14:55.057175  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:14:55.057182  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:14:55.057193  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:14:55.057202  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:14:55.057208  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:14:55.060446  356731 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 03:14:55.061756  356731 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:14:55.064542  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:55.064948  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:14:55.064981  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:14:55.065153  356731 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 03:14:55.069082  356731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
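	The one-liner above rewrites /etc/hosts atomically: it filters out any existing host.minikube.internal line, appends the fresh 192.168.39.1 mapping, and copies the temp file back over /etc/hosts. Verifying the entry from inside the VM is straightforward (a verification sketch only):
	    # The gateway alias should now resolve inside the VM.
	    grep -n 'host.minikube.internal' /etc/hosts
	    getent hosts host.minikube.internal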
	I1128 03:14:55.081565  356731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:14:55.081621  356731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 03:14:55.114749  356731 command_runner.go:130] > {
	I1128 03:14:55.114771  356731 command_runner.go:130] >   "images": [
	I1128 03:14:55.114776  356731 command_runner.go:130] >     {
	I1128 03:14:55.114784  356731 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1128 03:14:55.114789  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:55.114795  356731 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 03:14:55.114799  356731 command_runner.go:130] >       ],
	I1128 03:14:55.114803  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:55.114811  356731 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1128 03:14:55.114826  356731 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1128 03:14:55.114831  356731 command_runner.go:130] >       ],
	I1128 03:14:55.114835  356731 command_runner.go:130] >       "size": "750414",
	I1128 03:14:55.114839  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:55.114844  356731 command_runner.go:130] >         "value": "65535"
	I1128 03:14:55.114850  356731 command_runner.go:130] >       },
	I1128 03:14:55.114854  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:55.114863  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:55.114870  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:55.114874  356731 command_runner.go:130] >     }
	I1128 03:14:55.114880  356731 command_runner.go:130] >   ]
	I1128 03:14:55.114885  356731 command_runner.go:130] > }
	I1128 03:14:55.115980  356731 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
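	The JSON above comes from "sudo crictl images --output json"; at this point only the pinned pause image is present, so minikube concludes the preload tarball must be copied in and extracted. With jq the same check collapses to a one-liner (jq on the guest is an assumption, it is not shown in the log):
	    # List image tags known to CRI-O and check for the expected apiserver image.
	    sudo crictl images --output json | jq -r '.images[].repoTags[]?' | grep kube-apiserver \
	        || echo "kube-apiserver:v1.28.4 not present - preload needed"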
	I1128 03:14:55.116042  356731 ssh_runner.go:195] Run: which lz4
	I1128 03:14:55.119734  356731 command_runner.go:130] > /usr/bin/lz4
	I1128 03:14:55.119770  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1128 03:14:55.119857  356731 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 03:14:55.123607  356731 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 03:14:55.123835  356731 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 03:14:55.123870  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 03:14:56.968364  356731 crio.go:444] Took 1.848539 seconds to copy over tarball
	I1128 03:14:56.968446  356731 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 03:14:59.770245  356731 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801767366s)
	I1128 03:14:59.770282  356731 crio.go:451] Took 2.801882 seconds to extract the tarball
	I1128 03:14:59.770293  356731 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 03:14:59.810151  356731 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 03:14:59.855151  356731 command_runner.go:130] > {
	I1128 03:14:59.855175  356731 command_runner.go:130] >   "images": [
	I1128 03:14:59.855179  356731 command_runner.go:130] >     {
	I1128 03:14:59.855186  356731 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I1128 03:14:59.855192  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855198  356731 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I1128 03:14:59.855202  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855207  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855220  356731 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I1128 03:14:59.855231  356731 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I1128 03:14:59.855240  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855248  356731 command_runner.go:130] >       "size": "65258016",
	I1128 03:14:59.855257  356731 command_runner.go:130] >       "uid": null,
	I1128 03:14:59.855261  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855269  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.855273  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.855277  356731 command_runner.go:130] >     },
	I1128 03:14:59.855281  356731 command_runner.go:130] >     {
	I1128 03:14:59.855287  356731 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1128 03:14:59.855294  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855299  356731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1128 03:14:59.855303  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855307  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855318  356731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1128 03:14:59.855334  356731 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1128 03:14:59.855348  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855360  356731 command_runner.go:130] >       "size": "31470524",
	I1128 03:14:59.855367  356731 command_runner.go:130] >       "uid": null,
	I1128 03:14:59.855371  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855376  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.855380  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.855386  356731 command_runner.go:130] >     },
	I1128 03:14:59.855390  356731 command_runner.go:130] >     {
	I1128 03:14:59.855398  356731 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I1128 03:14:59.855405  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855410  356731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I1128 03:14:59.855419  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855429  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855442  356731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I1128 03:14:59.855459  356731 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I1128 03:14:59.855468  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855475  356731 command_runner.go:130] >       "size": "53621675",
	I1128 03:14:59.855480  356731 command_runner.go:130] >       "uid": null,
	I1128 03:14:59.855487  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855494  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.855502  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.855508  356731 command_runner.go:130] >     },
	I1128 03:14:59.855513  356731 command_runner.go:130] >     {
	I1128 03:14:59.855527  356731 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I1128 03:14:59.855538  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855547  356731 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I1128 03:14:59.855557  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855568  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855582  356731 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I1128 03:14:59.855599  356731 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I1128 03:14:59.855610  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855617  356731 command_runner.go:130] >       "size": "295456551",
	I1128 03:14:59.855623  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:59.855633  356731 command_runner.go:130] >         "value": "0"
	I1128 03:14:59.855643  356731 command_runner.go:130] >       },
	I1128 03:14:59.855651  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855662  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.855672  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.855681  356731 command_runner.go:130] >     },
	I1128 03:14:59.855689  356731 command_runner.go:130] >     {
	I1128 03:14:59.855703  356731 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I1128 03:14:59.855711  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855720  356731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I1128 03:14:59.855729  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855740  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855756  356731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I1128 03:14:59.855771  356731 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I1128 03:14:59.855780  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855790  356731 command_runner.go:130] >       "size": "127226832",
	I1128 03:14:59.855800  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:59.855808  356731 command_runner.go:130] >         "value": "0"
	I1128 03:14:59.855812  356731 command_runner.go:130] >       },
	I1128 03:14:59.855822  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855832  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.855843  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.855852  356731 command_runner.go:130] >     },
	I1128 03:14:59.855861  356731 command_runner.go:130] >     {
	I1128 03:14:59.855874  356731 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I1128 03:14:59.855884  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.855894  356731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I1128 03:14:59.855900  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855907  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.855923  356731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I1128 03:14:59.855939  356731 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I1128 03:14:59.855948  356731 command_runner.go:130] >       ],
	I1128 03:14:59.855958  356731 command_runner.go:130] >       "size": "123261750",
	I1128 03:14:59.855967  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:59.855976  356731 command_runner.go:130] >         "value": "0"
	I1128 03:14:59.855983  356731 command_runner.go:130] >       },
	I1128 03:14:59.855988  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.855998  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.856008  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.856034  356731 command_runner.go:130] >     },
	I1128 03:14:59.856045  356731 command_runner.go:130] >     {
	I1128 03:14:59.856055  356731 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I1128 03:14:59.856064  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.856074  356731 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I1128 03:14:59.856078  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856083  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.856098  356731 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I1128 03:14:59.856113  356731 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I1128 03:14:59.856123  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856131  356731 command_runner.go:130] >       "size": "74749335",
	I1128 03:14:59.856141  356731 command_runner.go:130] >       "uid": null,
	I1128 03:14:59.856148  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.856157  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.856167  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.856176  356731 command_runner.go:130] >     },
	I1128 03:14:59.856183  356731 command_runner.go:130] >     {
	I1128 03:14:59.856191  356731 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I1128 03:14:59.856203  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.856214  356731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I1128 03:14:59.856222  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856230  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.856305  356731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I1128 03:14:59.856323  356731 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I1128 03:14:59.856327  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856332  356731 command_runner.go:130] >       "size": "61551410",
	I1128 03:14:59.856335  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:59.856340  356731 command_runner.go:130] >         "value": "0"
	I1128 03:14:59.856344  356731 command_runner.go:130] >       },
	I1128 03:14:59.856348  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.856352  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.856356  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.856363  356731 command_runner.go:130] >     },
	I1128 03:14:59.856366  356731 command_runner.go:130] >     {
	I1128 03:14:59.856375  356731 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I1128 03:14:59.856383  356731 command_runner.go:130] >       "repoTags": [
	I1128 03:14:59.856388  356731 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I1128 03:14:59.856392  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856397  356731 command_runner.go:130] >       "repoDigests": [
	I1128 03:14:59.856406  356731 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I1128 03:14:59.856416  356731 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I1128 03:14:59.856429  356731 command_runner.go:130] >       ],
	I1128 03:14:59.856436  356731 command_runner.go:130] >       "size": "750414",
	I1128 03:14:59.856440  356731 command_runner.go:130] >       "uid": {
	I1128 03:14:59.856444  356731 command_runner.go:130] >         "value": "65535"
	I1128 03:14:59.856450  356731 command_runner.go:130] >       },
	I1128 03:14:59.856454  356731 command_runner.go:130] >       "username": "",
	I1128 03:14:59.856461  356731 command_runner.go:130] >       "spec": null,
	I1128 03:14:59.856465  356731 command_runner.go:130] >       "pinned": false
	I1128 03:14:59.856468  356731 command_runner.go:130] >     }
	I1128 03:14:59.856474  356731 command_runner.go:130] >   ]
	I1128 03:14:59.856477  356731 command_runner.go:130] > }
	I1128 03:14:59.856634  356731 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 03:14:59.856647  356731 cache_images.go:84] Images are preloaded, skipping loading
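	The preload check logged above (crio.go:496, cache_images.go:84) amounts to comparing the crictl images -o json listing against the image names expected for Kubernetes v1.28.4. The Go sketch below shows one minimal way to do that comparison; the JSON shape (a top-level "images" array with "repoTags") matches the listing quoted above, but allPreloaded and the sample data are illustrative, not minikube's actual code.

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList mirrors the relevant parts of `crictl images -o json` output.
	type imageList struct {
		Images []struct {
			ID       string   `json:"id"`
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	// allPreloaded reports whether every image name in want appears among the
	// repo tags of the listing. Names are compared verbatim, as in the log above.
	func allPreloaded(listing []byte, want []string) (bool, error) {
		var l imageList
		if err := json.Unmarshal(listing, &l); err != nil {
			return false, err
		}
		have := map[string]bool{}
		for _, img := range l.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, w := range want {
			if !have[w] {
				return false, nil
			}
		}
		return true, nil
	}

	func main() {
		// Hypothetical one-image listing, shaped like the crictl output above.
		listing := []byte(`{"images":[{"id":"abc","repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"]}]}`)
		ok, err := allPreloaded(listing, []string{"registry.k8s.io/kube-apiserver:v1.28.4"})
		fmt.Println(ok, err)
	}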
	I1128 03:14:59.856708  356731 ssh_runner.go:195] Run: crio config
	I1128 03:14:59.911489  356731 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 03:14:59.911525  356731 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 03:14:59.911536  356731 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 03:14:59.911540  356731 command_runner.go:130] > #
	I1128 03:14:59.911553  356731 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 03:14:59.911565  356731 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 03:14:59.911574  356731 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 03:14:59.911591  356731 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 03:14:59.911603  356731 command_runner.go:130] > # reload'.
	I1128 03:14:59.911616  356731 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 03:14:59.911630  356731 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 03:14:59.911640  356731 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 03:14:59.911655  356731 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 03:14:59.911661  356731 command_runner.go:130] > [crio]
	I1128 03:14:59.911672  356731 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 03:14:59.911681  356731 command_runner.go:130] > # containers images, in this directory.
	I1128 03:14:59.911689  356731 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 03:14:59.911707  356731 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 03:14:59.911742  356731 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 03:14:59.911757  356731 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 03:14:59.911767  356731 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 03:14:59.911776  356731 command_runner.go:130] > storage_driver = "overlay"
	I1128 03:14:59.911787  356731 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 03:14:59.911797  356731 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 03:14:59.911807  356731 command_runner.go:130] > storage_option = [
	I1128 03:14:59.911816  356731 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 03:14:59.911824  356731 command_runner.go:130] > ]
	I1128 03:14:59.911836  356731 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 03:14:59.911849  356731 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 03:14:59.911860  356731 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 03:14:59.911872  356731 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 03:14:59.911886  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 03:14:59.911897  356731 command_runner.go:130] > # always happen on a node reboot
	I1128 03:14:59.911905  356731 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 03:14:59.911918  356731 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 03:14:59.911931  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 03:14:59.911952  356731 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 03:14:59.911964  356731 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 03:14:59.911977  356731 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 03:14:59.911993  356731 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 03:14:59.912033  356731 command_runner.go:130] > # internal_wipe = true
	I1128 03:14:59.912051  356731 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 03:14:59.912061  356731 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 03:14:59.912071  356731 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 03:14:59.912080  356731 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 03:14:59.912101  356731 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 03:14:59.912108  356731 command_runner.go:130] > [crio.api]
	I1128 03:14:59.912118  356731 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 03:14:59.912129  356731 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 03:14:59.912140  356731 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 03:14:59.912151  356731 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 03:14:59.912162  356731 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 03:14:59.912173  356731 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 03:14:59.912182  356731 command_runner.go:130] > # stream_port = "0"
	I1128 03:14:59.912194  356731 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 03:14:59.912204  356731 command_runner.go:130] > # stream_enable_tls = false
	I1128 03:14:59.912214  356731 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 03:14:59.912226  356731 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 03:14:59.912240  356731 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 03:14:59.912253  356731 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 03:14:59.912262  356731 command_runner.go:130] > # minutes.
	I1128 03:14:59.912270  356731 command_runner.go:130] > # stream_tls_cert = ""
	I1128 03:14:59.912283  356731 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 03:14:59.912301  356731 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 03:14:59.912312  356731 command_runner.go:130] > # stream_tls_key = ""
	I1128 03:14:59.912325  356731 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 03:14:59.912338  356731 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 03:14:59.912350  356731 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 03:14:59.912361  356731 command_runner.go:130] > # stream_tls_ca = ""
	I1128 03:14:59.912372  356731 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:14:59.912403  356731 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 03:14:59.912419  356731 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:14:59.912429  356731 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 03:14:59.912464  356731 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 03:14:59.912477  356731 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 03:14:59.912484  356731 command_runner.go:130] > [crio.runtime]
	I1128 03:14:59.912501  356731 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 03:14:59.912510  356731 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 03:14:59.912520  356731 command_runner.go:130] > # "nofile=1024:2048"
	I1128 03:14:59.912533  356731 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 03:14:59.912542  356731 command_runner.go:130] > # default_ulimits = [
	I1128 03:14:59.912555  356731 command_runner.go:130] > # ]
	I1128 03:14:59.912570  356731 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 03:14:59.912580  356731 command_runner.go:130] > # no_pivot = false
	I1128 03:14:59.912588  356731 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 03:14:59.912602  356731 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 03:14:59.912613  356731 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 03:14:59.912624  356731 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 03:14:59.912636  356731 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 03:14:59.912648  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:14:59.912659  356731 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 03:14:59.912667  356731 command_runner.go:130] > # Cgroup setting for conmon
	I1128 03:14:59.912678  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 03:14:59.912688  356731 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 03:14:59.912699  356731 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 03:14:59.912711  356731 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 03:14:59.912725  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:14:59.912735  356731 command_runner.go:130] > conmon_env = [
	I1128 03:14:59.912748  356731 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 03:14:59.912760  356731 command_runner.go:130] > ]
	I1128 03:14:59.912773  356731 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 03:14:59.912782  356731 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 03:14:59.912795  356731 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 03:14:59.912805  356731 command_runner.go:130] > # default_env = [
	I1128 03:14:59.912812  356731 command_runner.go:130] > # ]
	I1128 03:14:59.912822  356731 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 03:14:59.912832  356731 command_runner.go:130] > # selinux = false
	I1128 03:14:59.912843  356731 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 03:14:59.912857  356731 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 03:14:59.912869  356731 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 03:14:59.912888  356731 command_runner.go:130] > # seccomp_profile = ""
	I1128 03:14:59.912899  356731 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 03:14:59.912912  356731 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 03:14:59.912925  356731 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 03:14:59.912936  356731 command_runner.go:130] > # which might increase security.
	I1128 03:14:59.912947  356731 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 03:14:59.912960  356731 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 03:14:59.912979  356731 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 03:14:59.913072  356731 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 03:14:59.913091  356731 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 03:14:59.913100  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:14:59.913108  356731 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 03:14:59.913118  356731 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 03:14:59.913125  356731 command_runner.go:130] > # the cgroup blockio controller.
	I1128 03:14:59.913134  356731 command_runner.go:130] > # blockio_config_file = ""
	I1128 03:14:59.913148  356731 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 03:14:59.913160  356731 command_runner.go:130] > # irqbalance daemon.
	I1128 03:14:59.913170  356731 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 03:14:59.913184  356731 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 03:14:59.913195  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:14:59.913202  356731 command_runner.go:130] > # rdt_config_file = ""
	I1128 03:14:59.913215  356731 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 03:14:59.913225  356731 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 03:14:59.913237  356731 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 03:14:59.913248  356731 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 03:14:59.913264  356731 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 03:14:59.913278  356731 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 03:14:59.913288  356731 command_runner.go:130] > # will be added.
	I1128 03:14:59.913295  356731 command_runner.go:130] > # default_capabilities = [
	I1128 03:14:59.913305  356731 command_runner.go:130] > # 	"CHOWN",
	I1128 03:14:59.913312  356731 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 03:14:59.913322  356731 command_runner.go:130] > # 	"FSETID",
	I1128 03:14:59.913329  356731 command_runner.go:130] > # 	"FOWNER",
	I1128 03:14:59.913339  356731 command_runner.go:130] > # 	"SETGID",
	I1128 03:14:59.913345  356731 command_runner.go:130] > # 	"SETUID",
	I1128 03:14:59.913355  356731 command_runner.go:130] > # 	"SETPCAP",
	I1128 03:14:59.913362  356731 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 03:14:59.913372  356731 command_runner.go:130] > # 	"KILL",
	I1128 03:14:59.913377  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913388  356731 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 03:14:59.913400  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:14:59.913417  356731 command_runner.go:130] > # default_sysctls = [
	I1128 03:14:59.913426  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913438  356731 command_runner.go:130] > # List of devices on the host that a
	I1128 03:14:59.913452  356731 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 03:14:59.913462  356731 command_runner.go:130] > # allowed_devices = [
	I1128 03:14:59.913471  356731 command_runner.go:130] > # 	"/dev/fuse",
	I1128 03:14:59.913479  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913488  356731 command_runner.go:130] > # List of additional devices, specified as
	I1128 03:14:59.913503  356731 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 03:14:59.913516  356731 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 03:14:59.913554  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:14:59.913565  356731 command_runner.go:130] > # additional_devices = [
	I1128 03:14:59.913571  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913582  356731 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 03:14:59.913593  356731 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 03:14:59.913600  356731 command_runner.go:130] > # 	"/etc/cdi",
	I1128 03:14:59.913611  356731 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 03:14:59.913617  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913631  356731 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 03:14:59.913644  356731 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 03:14:59.913657  356731 command_runner.go:130] > # Defaults to false.
	I1128 03:14:59.913669  356731 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 03:14:59.913681  356731 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 03:14:59.913694  356731 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 03:14:59.913704  356731 command_runner.go:130] > # hooks_dir = [
	I1128 03:14:59.913712  356731 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 03:14:59.913721  356731 command_runner.go:130] > # ]
	I1128 03:14:59.913731  356731 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 03:14:59.913745  356731 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 03:14:59.913757  356731 command_runner.go:130] > # its default mounts from the following two files:
	I1128 03:14:59.913766  356731 command_runner.go:130] > #
	I1128 03:14:59.913777  356731 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 03:14:59.913793  356731 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 03:14:59.913806  356731 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 03:14:59.913812  356731 command_runner.go:130] > #
	I1128 03:14:59.913825  356731 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 03:14:59.913838  356731 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 03:14:59.913852  356731 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 03:14:59.913868  356731 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 03:14:59.913875  356731 command_runner.go:130] > #
	I1128 03:14:59.913906  356731 command_runner.go:130] > # default_mounts_file = ""
	I1128 03:14:59.913919  356731 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 03:14:59.913930  356731 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 03:14:59.913941  356731 command_runner.go:130] > pids_limit = 1024
	I1128 03:14:59.913951  356731 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 03:14:59.913964  356731 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 03:14:59.913977  356731 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 03:14:59.913993  356731 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 03:14:59.914003  356731 command_runner.go:130] > # log_size_max = -1
	I1128 03:14:59.914017  356731 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 03:14:59.914035  356731 command_runner.go:130] > # log_to_journald = false
	I1128 03:14:59.914049  356731 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 03:14:59.914060  356731 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 03:14:59.914069  356731 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 03:14:59.914085  356731 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 03:14:59.914097  356731 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 03:14:59.914108  356731 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 03:14:59.914121  356731 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 03:14:59.914132  356731 command_runner.go:130] > # read_only = false
	I1128 03:14:59.914143  356731 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 03:14:59.914156  356731 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 03:14:59.914167  356731 command_runner.go:130] > # live configuration reload.
	I1128 03:14:59.914176  356731 command_runner.go:130] > # log_level = "info"
	I1128 03:14:59.914186  356731 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 03:14:59.914198  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:14:59.914205  356731 command_runner.go:130] > # log_filter = ""
	I1128 03:14:59.914219  356731 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 03:14:59.914233  356731 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 03:14:59.914242  356731 command_runner.go:130] > # separated by comma.
	I1128 03:14:59.914250  356731 command_runner.go:130] > # uid_mappings = ""
	I1128 03:14:59.914263  356731 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 03:14:59.914276  356731 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 03:14:59.914286  356731 command_runner.go:130] > # separated by comma.
	I1128 03:14:59.914297  356731 command_runner.go:130] > # gid_mappings = ""
	I1128 03:14:59.914317  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 03:14:59.914330  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:14:59.914348  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:14:59.914359  356731 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 03:14:59.914372  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 03:14:59.914385  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:14:59.914397  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:14:59.914401  356731 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 03:14:59.914409  356731 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 03:14:59.914416  356731 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 03:14:59.914424  356731 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 03:14:59.914428  356731 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 03:14:59.914436  356731 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 03:14:59.914442  356731 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 03:14:59.914449  356731 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 03:14:59.914454  356731 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 03:14:59.914459  356731 command_runner.go:130] > drop_infra_ctr = false
	I1128 03:14:59.914465  356731 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 03:14:59.914475  356731 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 03:14:59.914484  356731 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 03:14:59.914490  356731 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 03:14:59.914496  356731 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 03:14:59.914501  356731 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 03:14:59.914506  356731 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 03:14:59.914515  356731 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 03:14:59.914523  356731 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 03:14:59.914529  356731 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 03:14:59.914541  356731 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 03:14:59.914555  356731 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 03:14:59.914563  356731 command_runner.go:130] > # default_runtime = "runc"
	I1128 03:14:59.914568  356731 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 03:14:59.914582  356731 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 03:14:59.914597  356731 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 03:14:59.914607  356731 command_runner.go:130] > # creation as a file is not desired either.
	I1128 03:14:59.914615  356731 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 03:14:59.914622  356731 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 03:14:59.914652  356731 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 03:14:59.914659  356731 command_runner.go:130] > # ]
	I1128 03:14:59.914670  356731 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 03:14:59.914680  356731 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 03:14:59.914694  356731 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 03:14:59.914706  356731 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 03:14:59.914714  356731 command_runner.go:130] > #
	I1128 03:14:59.914721  356731 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 03:14:59.914733  356731 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 03:14:59.914744  356731 command_runner.go:130] > #  runtime_type = "oci"
	I1128 03:14:59.914757  356731 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 03:14:59.914769  356731 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 03:14:59.914777  356731 command_runner.go:130] > #  allowed_annotations = []
	I1128 03:14:59.914785  356731 command_runner.go:130] > # Where:
	I1128 03:14:59.914792  356731 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 03:14:59.914801  356731 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 03:14:59.914807  356731 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 03:14:59.914815  356731 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 03:14:59.914825  356731 command_runner.go:130] > #   in $PATH.
	I1128 03:14:59.914834  356731 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 03:14:59.914839  356731 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 03:14:59.914847  356731 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 03:14:59.914851  356731 command_runner.go:130] > #   state.
	I1128 03:14:59.914857  356731 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 03:14:59.914865  356731 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1128 03:14:59.914874  356731 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 03:14:59.914886  356731 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 03:14:59.914900  356731 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 03:14:59.914914  356731 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 03:14:59.914925  356731 command_runner.go:130] > #   The currently recognized values are:
	I1128 03:14:59.914938  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 03:14:59.914954  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 03:14:59.914967  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 03:14:59.914977  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 03:14:59.914992  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 03:14:59.915004  356731 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 03:14:59.915015  356731 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 03:14:59.915024  356731 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 03:14:59.915033  356731 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 03:14:59.915040  356731 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 03:14:59.915044  356731 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 03:14:59.915049  356731 command_runner.go:130] > runtime_type = "oci"
	I1128 03:14:59.915053  356731 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 03:14:59.915058  356731 command_runner.go:130] > runtime_config_path = ""
	I1128 03:14:59.915063  356731 command_runner.go:130] > monitor_path = ""
	I1128 03:14:59.915073  356731 command_runner.go:130] > monitor_cgroup = ""
	I1128 03:14:59.915080  356731 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 03:14:59.915094  356731 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 03:14:59.915104  356731 command_runner.go:130] > # running containers
	I1128 03:14:59.915112  356731 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 03:14:59.915126  356731 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 03:14:59.915199  356731 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 03:14:59.915213  356731 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 03:14:59.915225  356731 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 03:14:59.915239  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 03:14:59.915250  356731 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 03:14:59.915261  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 03:14:59.915273  356731 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 03:14:59.915281  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 03:14:59.915292  356731 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 03:14:59.915305  356731 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 03:14:59.915316  356731 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 03:14:59.915332  356731 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 03:14:59.915347  356731 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 03:14:59.915359  356731 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 03:14:59.915375  356731 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 03:14:59.915388  356731 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 03:14:59.915401  356731 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 03:14:59.915414  356731 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 03:14:59.915424  356731 command_runner.go:130] > # Example:
	I1128 03:14:59.915455  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 03:14:59.915468  356731 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 03:14:59.915487  356731 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 03:14:59.915499  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 03:14:59.915510  356731 command_runner.go:130] > # cpuset = 0
	I1128 03:14:59.915519  356731 command_runner.go:130] > # cpushares = "0-1"
	I1128 03:14:59.915528  356731 command_runner.go:130] > # Where:
	I1128 03:14:59.915540  356731 command_runner.go:130] > # The workload name is workload-type.
	I1128 03:14:59.915554  356731 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 03:14:59.915567  356731 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 03:14:59.915579  356731 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 03:14:59.915596  356731 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 03:14:59.915609  356731 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 03:14:59.915618  356731 command_runner.go:130] > # 
	I1128 03:14:59.915632  356731 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 03:14:59.915641  356731 command_runner.go:130] > #
	I1128 03:14:59.915653  356731 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 03:14:59.915663  356731 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 03:14:59.915676  356731 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 03:14:59.915690  356731 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 03:14:59.915707  356731 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 03:14:59.915716  356731 command_runner.go:130] > [crio.image]
	I1128 03:14:59.915729  356731 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 03:14:59.915743  356731 command_runner.go:130] > # default_transport = "docker://"
	I1128 03:14:59.915754  356731 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 03:14:59.915764  356731 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:14:59.915774  356731 command_runner.go:130] > # global_auth_file = ""
	I1128 03:14:59.915786  356731 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 03:14:59.915798  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:14:59.915810  356731 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 03:14:59.915824  356731 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 03:14:59.915837  356731 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:14:59.915848  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:14:59.915856  356731 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 03:14:59.915865  356731 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 03:14:59.915879  356731 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 03:14:59.915892  356731 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 03:14:59.915905  356731 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 03:14:59.915919  356731 command_runner.go:130] > # pause_command = "/pause"
	I1128 03:14:59.915932  356731 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 03:14:59.915942  356731 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 03:14:59.915953  356731 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 03:14:59.915967  356731 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 03:14:59.915979  356731 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 03:14:59.915989  356731 command_runner.go:130] > # signature_policy = ""
	I1128 03:14:59.915999  356731 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 03:14:59.916009  356731 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 03:14:59.916016  356731 command_runner.go:130] > # changing them here.
	I1128 03:14:59.916023  356731 command_runner.go:130] > # insecure_registries = [
	I1128 03:14:59.916026  356731 command_runner.go:130] > # ]
	I1128 03:14:59.916040  356731 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 03:14:59.916049  356731 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 03:14:59.916057  356731 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 03:14:59.916065  356731 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 03:14:59.916073  356731 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 03:14:59.916082  356731 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 03:14:59.916091  356731 command_runner.go:130] > # CNI plugins.
	I1128 03:14:59.916097  356731 command_runner.go:130] > [crio.network]
	I1128 03:14:59.916106  356731 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 03:14:59.916112  356731 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 03:14:59.916116  356731 command_runner.go:130] > # cni_default_network = ""
	I1128 03:14:59.916125  356731 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 03:14:59.916133  356731 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 03:14:59.916143  356731 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 03:14:59.916154  356731 command_runner.go:130] > # plugin_dirs = [
	I1128 03:14:59.916163  356731 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 03:14:59.916172  356731 command_runner.go:130] > # ]
	I1128 03:14:59.916184  356731 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 03:14:59.916194  356731 command_runner.go:130] > [crio.metrics]
	I1128 03:14:59.916202  356731 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 03:14:59.916208  356731 command_runner.go:130] > enable_metrics = true
	I1128 03:14:59.916219  356731 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 03:14:59.916231  356731 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 03:14:59.916242  356731 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 03:14:59.916259  356731 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 03:14:59.916271  356731 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 03:14:59.916284  356731 command_runner.go:130] > # metrics_collectors = [
	I1128 03:14:59.916295  356731 command_runner.go:130] > # 	"operations",
	I1128 03:14:59.916306  356731 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 03:14:59.916317  356731 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 03:14:59.916327  356731 command_runner.go:130] > # 	"operations_errors",
	I1128 03:14:59.916338  356731 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 03:14:59.916369  356731 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 03:14:59.916384  356731 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 03:14:59.916391  356731 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 03:14:59.916404  356731 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 03:14:59.916412  356731 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 03:14:59.916420  356731 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 03:14:59.916431  356731 command_runner.go:130] > # 	"containers_oom_total",
	I1128 03:14:59.916438  356731 command_runner.go:130] > # 	"containers_oom",
	I1128 03:14:59.916445  356731 command_runner.go:130] > # 	"processes_defunct",
	I1128 03:14:59.916460  356731 command_runner.go:130] > # 	"operations_total",
	I1128 03:14:59.916469  356731 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 03:14:59.916481  356731 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 03:14:59.916491  356731 command_runner.go:130] > # 	"operations_errors_total",
	I1128 03:14:59.916501  356731 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 03:14:59.916509  356731 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 03:14:59.916520  356731 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 03:14:59.916527  356731 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 03:14:59.916537  356731 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 03:14:59.916541  356731 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 03:14:59.916547  356731 command_runner.go:130] > # ]
	I1128 03:14:59.916556  356731 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 03:14:59.916563  356731 command_runner.go:130] > # metrics_port = 9090
	I1128 03:14:59.916568  356731 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 03:14:59.916574  356731 command_runner.go:130] > # metrics_socket = ""
	I1128 03:14:59.916580  356731 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 03:14:59.916588  356731 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 03:14:59.916594  356731 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 03:14:59.916601  356731 command_runner.go:130] > # certificate on any modification event.
	I1128 03:14:59.916608  356731 command_runner.go:130] > # metrics_cert = ""
	I1128 03:14:59.916615  356731 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 03:14:59.916620  356731 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 03:14:59.916626  356731 command_runner.go:130] > # metrics_key = ""
	I1128 03:14:59.916632  356731 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 03:14:59.916638  356731 command_runner.go:130] > [crio.tracing]
	I1128 03:14:59.916644  356731 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 03:14:59.916650  356731 command_runner.go:130] > # enable_tracing = false
	I1128 03:14:59.916656  356731 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 03:14:59.916663  356731 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 03:14:59.916667  356731 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 03:14:59.916674  356731 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 03:14:59.916680  356731 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 03:14:59.916686  356731 command_runner.go:130] > [crio.stats]
	I1128 03:14:59.916692  356731 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 03:14:59.916700  356731 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 03:14:59.916704  356731 command_runner.go:130] > # stats_collection_period = 0
	I1128 03:14:59.916732  356731 command_runner.go:130] ! time="2023-11-28 03:14:59.852210407Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 03:14:59.916747  356731 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
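	The crio config dump above is plain TOML, so the effective (uncommented) settings such as cgroup_manager and pause_image can be pulled out with a few lines of Go for a quick sanity check. This is a simplified inspection sketch, not how minikube consumes the output; effectiveSettings is a hypothetical helper and it deliberately ignores multi-line TOML values such as the storage_option array.

	package main

	import (
		"bufio"
		"fmt"
		"strings"
	)

	// effectiveSettings collects uncommented `key = value` pairs from a crio
	// config dump, skipping blank lines, comments and [section] headers.
	func effectiveSettings(dump string) map[string]string {
		out := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(dump))
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") || strings.HasPrefix(line, "[") {
				continue
			}
			if k, v, ok := strings.Cut(line, "="); ok {
				out[strings.TrimSpace(k)] = strings.Trim(strings.TrimSpace(v), `"`)
			}
		}
		return out
	}

	func main() {
		// Two lines taken from the dump above, as a stand-in for the full config.
		dump := "cgroup_manager = \"cgroupfs\"\npause_image = \"registry.k8s.io/pause:3.9\"\n"
		s := effectiveSettings(dump)
		fmt.Println(s["cgroup_manager"], s["pause_image"]) // cgroupfs registry.k8s.io/pause:3.9
	}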
	I1128 03:14:59.916823  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:14:59.916835  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:14:59.916858  356731 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 03:14:59.916892  356731 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.73 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-112998 NodeName:multinode-112998 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.73 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 03:14:59.917038  356731 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.73
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-112998"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.73
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
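The block above is the complete multi-document kubeadm configuration (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new a few lines below. When debugging a run like this one, a quick sanity check is to split the generated file on its document separators and decode each document independently. The sketch below is illustrative only (it is not minikube code) and assumes gopkg.in/yaml.v3 is available:

    // yamlcheck.go: decode each YAML document in the generated kubeadm config
    // and print its apiVersion/kind, mirroring the four documents shown above.
    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatalf("invalid YAML document: %v", err)
            }
            fmt.Printf("found %v/%v\n", doc["apiVersion"], doc["kind"])
        }
    }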
	I1128 03:14:59.917116  356731 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.73
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 03:14:59.917173  356731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 03:14:59.925584  356731 command_runner.go:130] > kubeadm
	I1128 03:14:59.925604  356731 command_runner.go:130] > kubectl
	I1128 03:14:59.925610  356731 command_runner.go:130] > kubelet
	I1128 03:14:59.925717  356731 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 03:14:59.925828  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 03:14:59.933732  356731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1128 03:14:59.949353  356731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 03:14:59.964693  356731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I1128 03:14:59.981132  356731 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I1128 03:14:59.984798  356731 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.73	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
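The bash one-liner above makes the /etc/hosts update idempotent: it drops any existing line for control-plane.minikube.internal, appends the current mapping, and copies the result back with sudo. A hedged Go equivalent of the same filter-and-append step (illustrative only; like the sudo cp in the log it needs root to write /etc/hosts) could look like this:

    // hosts.go: re-point control-plane.minikube.internal at the node IP from the log.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.39.73" // the control-plane IP seen throughout this log

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Skip any stale mapping for the control-plane name; keep everything else.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }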
	I1128 03:14:59.996993  356731 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998 for IP: 192.168.39.73
	I1128 03:14:59.997039  356731 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:14:59.997231  356731 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 03:14:59.997292  356731 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 03:14:59.997395  356731 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key
	I1128 03:14:59.997482  356731 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key.8b49dc8b
	I1128 03:14:59.997549  356731 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key
	I1128 03:14:59.997563  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1128 03:14:59.997584  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1128 03:14:59.997603  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1128 03:14:59.997619  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1128 03:14:59.997637  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 03:14:59.997655  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 03:14:59.997672  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 03:14:59.997690  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 03:14:59.997752  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 03:14:59.997791  356731 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 03:14:59.997811  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 03:14:59.997840  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 03:14:59.997881  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 03:14:59.997920  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 03:14:59.997979  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:14:59.998020  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 03:14:59.998045  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 03:14:59.998063  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:14:59.998738  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 03:15:00.021850  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1128 03:15:00.044756  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 03:15:00.068365  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1128 03:15:00.090433  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 03:15:00.113992  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 03:15:00.138093  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 03:15:00.162558  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 03:15:00.186950  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 03:15:00.212172  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 03:15:00.239085  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 03:15:00.265275  356731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 03:15:00.282430  356731 ssh_runner.go:195] Run: openssl version
	I1128 03:15:00.287583  356731 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 03:15:00.287919  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 03:15:00.297422  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 03:15:00.301756  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:15:00.301812  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:15:00.301882  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 03:15:00.307257  356731 command_runner.go:130] > 51391683
	I1128 03:15:00.307495  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 03:15:00.317145  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 03:15:00.328330  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 03:15:00.332953  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:15:00.333019  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:15:00.333081  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 03:15:00.338665  356731 command_runner.go:130] > 3ec20f2e
	I1128 03:15:00.338737  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 03:15:00.348231  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 03:15:00.357673  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:15:00.361997  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:15:00.362023  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:15:00.362057  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:15:00.367137  356731 command_runner.go:130] > b5213941
	I1128 03:15:00.367400  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
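The pattern repeated above for 340515.pem, 3405152.pem and minikubeCA.pem is the standard OpenSSL CA-directory layout: compute the certificate's legacy subject hash (51391683, 3ec20f2e, b5213941 in this run), then point /etc/ssl/certs/<hash>.0 at the certificate so TLS libraries can find it. A rough sketch of those two steps in Go (not taken from minikube; it shells out to the openssl binary and must run as root to write under /etc/ssl/certs) is:

    // cahash.go: install a CA certificate under /etc/ssl/certs using its subject hash.
    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        cert := "/usr/share/ca-certificates/minikubeCA.pem"

        // openssl prints the legacy subject hash used for /etc/ssl/certs lookups.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // "b5213941" for minikubeCA.pem in the log above

        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        _ = os.Remove(link) // replace a stale symlink if one exists
        if err := os.Symlink(cert, link); err != nil {
            log.Fatal(err)
        }
        fmt.Println("installed", link, "->", cert)
    }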
	I1128 03:15:00.376482  356731 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 03:15:00.380758  356731 command_runner.go:130] > ca.crt
	I1128 03:15:00.380782  356731 command_runner.go:130] > ca.key
	I1128 03:15:00.380791  356731 command_runner.go:130] > healthcheck-client.crt
	I1128 03:15:00.380797  356731 command_runner.go:130] > healthcheck-client.key
	I1128 03:15:00.380809  356731 command_runner.go:130] > peer.crt
	I1128 03:15:00.380812  356731 command_runner.go:130] > peer.key
	I1128 03:15:00.380818  356731 command_runner.go:130] > server.crt
	I1128 03:15:00.380824  356731 command_runner.go:130] > server.key
	I1128 03:15:00.380897  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 03:15:00.386590  356731 command_runner.go:130] > Certificate will not expire
	I1128 03:15:00.386655  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 03:15:00.391978  356731 command_runner.go:130] > Certificate will not expire
	I1128 03:15:00.392398  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 03:15:00.397999  356731 command_runner.go:130] > Certificate will not expire
	I1128 03:15:00.398070  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 03:15:00.404452  356731 command_runner.go:130] > Certificate will not expire
	I1128 03:15:00.404796  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 03:15:00.410599  356731 command_runner.go:130] > Certificate will not expire
	I1128 03:15:00.410672  356731 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 03:15:00.415865  356731 command_runner.go:130] > Certificate will not expire
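Each openssl x509 -noout -checkend 86400 invocation above answers a single question: will this certificate still be valid 86400 seconds (24 hours) from now? The same check can be done without shelling out, using crypto/x509; the sketch below is illustrative and reuses one of the etcd certificate paths from the log:

    // checkend.go: pure-Go equivalent of "openssl x509 -noout -checkend 86400".
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // A certificate expiring within the next 24 hours would need regeneration.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }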
	I1128 03:15:00.416170  356731 kubeadm.go:404] StartCluster: {Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:15:00.416280  356731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 03:15:00.416366  356731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 03:15:00.451788  356731 cri.go:89] found id: ""
	I1128 03:15:00.451915  356731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 03:15:00.461102  356731 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1128 03:15:00.461134  356731 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1128 03:15:00.461142  356731 command_runner.go:130] > /var/lib/minikube/etcd:
	I1128 03:15:00.461147  356731 command_runner.go:130] > member
	I1128 03:15:00.461168  356731 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 03:15:00.461179  356731 kubeadm.go:636] restartCluster start
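The restart-versus-init decision above hinges on whether the kubelet flag file, the kubelet config and the etcd data directory already exist on the node; since all three are present (including an etcd member directory), the restart path is taken. A minimal existence check along those lines (illustrative only, using the same three paths listed in the log; the real logic also reads the listing output) could be:

    // restartcheck.go: decide between cluster restart and fresh init from on-disk state.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        paths := []string{
            "/var/lib/kubelet/kubeadm-flags.env",
            "/var/lib/kubelet/config.yaml",
            "/var/lib/minikube/etcd",
        }
        existing := 0
        for _, p := range paths {
            if _, err := os.Stat(p); err == nil {
                existing++
            }
        }
        if existing == len(paths) {
            fmt.Println("found existing configuration files, will attempt cluster restart")
        } else {
            fmt.Println("no usable prior state, falling back to a full kubeadm init")
        }
    }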
	I1128 03:15:00.461236  356731 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 03:15:00.469906  356731 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:00.470468  356731 kubeconfig.go:92] found "multinode-112998" server: "https://192.168.39.73:8443"
	I1128 03:15:00.470922  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:15:00.471249  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:15:00.471934  356731 cert_rotation.go:137] Starting client certificate rotation controller
	I1128 03:15:00.472276  356731 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 03:15:00.480602  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:00.480653  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:00.491822  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:00.491841  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:00.491892  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:00.502259  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:01.002958  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:01.003046  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:01.015089  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:01.502841  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:01.746104  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:01.757359  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:02.002736  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:02.002825  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:02.014949  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:02.502474  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:02.502603  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:02.514765  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:03.002351  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:03.002472  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:03.014320  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:03.502913  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:03.503023  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:03.514287  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:04.002846  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:04.002956  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:04.014165  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:04.502660  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:04.502769  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:04.514777  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:05.002952  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:05.003041  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:05.014710  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:05.503314  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:05.503395  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:05.514356  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:06.002925  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:06.003018  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:06.013843  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:06.502687  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:06.502795  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:06.514502  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:07.002743  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:07.002840  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:07.014842  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:07.502382  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:07.502510  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:07.514492  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:08.003191  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:08.003311  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:08.015205  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:08.502753  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:08.502917  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:08.514042  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:09.002588  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:09.002707  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:09.013772  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:09.503369  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:09.503466  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:09.514473  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:10.002622  356731 api_server.go:166] Checking apiserver status ...
	I1128 03:15:10.002730  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 03:15:10.013721  356731 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 03:15:10.481435  356731 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
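The block above is the restart probe: sudo pgrep -xnf kube-apiserver.*minikube.* is re-run roughly every 500ms, and because no apiserver process appears before the deadline the code concludes the cluster needs a reconfigure. A simplified, hypothetical Go version of that retry shape is sketched below; the ten-second timeout mirrors the window visible in the timestamps above but is an assumption, not the value minikube actually uses:

    // waitpid.go: poll for the kube-apiserver process until it appears or a deadline passes.
    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitForAPIServerPID mirrors the ~500ms polling cadence seen in the log above.
    func waitForAPIServerPID(ctx context.Context) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
            if err == nil {
                return strings.TrimSpace(string(out)), nil
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver never appeared: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        pid, err := waitForAPIServerPID(ctx)
        fmt.Println(pid, err)
    }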
	I1128 03:15:10.481471  356731 kubeadm.go:1128] stopping kube-system containers ...
	I1128 03:15:10.481487  356731 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 03:15:10.481589  356731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 03:15:10.520821  356731 cri.go:89] found id: ""
	I1128 03:15:10.520917  356731 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 03:15:10.536455  356731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 03:15:10.544874  356731 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I1128 03:15:10.544923  356731 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I1128 03:15:10.544935  356731 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I1128 03:15:10.544947  356731 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 03:15:10.544992  356731 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 03:15:10.545052  356731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 03:15:10.553296  356731 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 03:15:10.553325  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:10.682222  356731 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 03:15:10.683614  356731 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1128 03:15:10.685021  356731 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1128 03:15:10.686448  356731 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 03:15:10.687373  356731 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1128 03:15:10.687849  356731 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1128 03:15:10.688780  356731 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1128 03:15:10.693966  356731 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1128 03:15:10.695634  356731 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1128 03:15:10.696129  356731 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 03:15:10.697390  356731 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 03:15:10.698073  356731 command_runner.go:130] > [certs] Using the existing "sa" key
	I1128 03:15:10.699423  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:10.755123  356731 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 03:15:10.923018  356731 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 03:15:11.117979  356731 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 03:15:11.338152  356731 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 03:15:11.710176  356731 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 03:15:11.713451  356731 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.014000417s)
	I1128 03:15:11.713481  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:11.778548  356731 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:15:11.781361  356731 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:15:11.781925  356731 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 03:15:11.909650  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:11.971681  356731 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 03:15:11.971706  356731 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 03:15:11.971712  356731 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 03:15:11.971727  356731 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 03:15:11.971757  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:12.053450  356731 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 03:15:12.053493  356731 api_server.go:52] waiting for apiserver process to appear ...
	I1128 03:15:12.053550  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:12.066052  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:12.577517  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:13.077086  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:13.577875  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:14.077347  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:14.577462  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:14.603186  356731 command_runner.go:130] > 1076
	I1128 03:15:14.603296  356731 api_server.go:72] duration metric: took 2.549798113s to wait for apiserver process to appear ...
	I1128 03:15:14.603320  356731 api_server.go:88] waiting for apiserver healthz status ...
	I1128 03:15:14.603336  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:18.836145  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 03:15:18.836180  356731 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 03:15:18.836197  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:18.901336  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 03:15:18.901379  356731 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 03:15:19.402046  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:19.411437  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 03:15:19.411482  356731 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 03:15:19.901584  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:19.911977  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 03:15:19.912015  356731 api_server.go:103] status: https://192.168.39.73:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 03:15:20.401577  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:20.407308  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
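The progression above is typical of an apiserver coming up: /healthz returns 403 for system:anonymous until the rbac/bootstrap-roles post-start hook has created the default role bindings, then 500 while the remaining hooks finish, and finally 200 with the plain ok body seen here. A minimal, illustrative probe (not minikube's implementation) that issues the same unauthenticated request against this cluster's endpoint, trusting the CA from the rest.Config dump earlier in the log, might be:

    // healthz.go: anonymous GET of the apiserver health endpoint.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // CA path taken from the rest.Config shown earlier in this log; the request
        // itself carries no client certificate, so the apiserver sees system:anonymous.
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{RootCAs: pool},
        }}
        resp, err := client.Get("https://192.168.39.73:8443/healthz?verbose")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // Expect 403, then 500 with failing post-start hooks, then 200/ok as above.
        fmt.Printf("%d\n%s", resp.StatusCode, body)
    }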
	I1128 03:15:20.407458  356731 round_trippers.go:463] GET https://192.168.39.73:8443/version
	I1128 03:15:20.407470  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:20.407479  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:20.407485  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:20.417351  356731 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I1128 03:15:20.417380  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:20.417392  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:20.417401  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:20.417409  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:20.417417  356731 round_trippers.go:580]     Content-Length: 264
	I1128 03:15:20.417432  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:20 GMT
	I1128 03:15:20.417440  356731 round_trippers.go:580]     Audit-Id: 7595d46d-a8ca-4976-9248-b10d1d3898c5
	I1128 03:15:20.417452  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:20.417484  356731 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1128 03:15:20.417605  356731 api_server.go:141] control plane version: v1.28.4
	I1128 03:15:20.417631  356731 api_server.go:131] duration metric: took 5.814304237s to wait for apiserver health ...
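Once /healthz returns ok, the control-plane version is read back from GET /version; the JSON body above is exactly what the apiserver reports for v1.28.4. A small illustrative decoder for that payload (struct trimmed to the fields used here) is:

    // version.go: decode the /version payload shown above.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
    )

    // versionInfo mirrors the fields of the /version response used in this log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        payload := []byte(`{"major":"1","minor":"28","gitVersion":"v1.28.4","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(payload, &v); err != nil {
            log.Fatal(err)
        }
        fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform)
    }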
	I1128 03:15:20.417647  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:15:20.417658  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:15:20.419375  356731 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1128 03:15:20.420751  356731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 03:15:20.427667  356731 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 03:15:20.427691  356731 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 03:15:20.427711  356731 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 03:15:20.427725  356731 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:15:20.427738  356731 command_runner.go:130] > Access: 2023-11-28 03:14:47.716335571 +0000
	I1128 03:15:20.427749  356731 command_runner.go:130] > Modify: 2023-11-16 19:19:18.000000000 +0000
	I1128 03:15:20.427760  356731 command_runner.go:130] > Change: 2023-11-28 03:14:45.792335571 +0000
	I1128 03:15:20.427769  356731 command_runner.go:130] >  Birth: -
	I1128 03:15:20.427875  356731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 03:15:20.427895  356731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 03:15:20.448228  356731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 03:15:21.473649  356731 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:15:21.473686  356731 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:15:21.473697  356731 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 03:15:21.473710  356731 command_runner.go:130] > daemonset.apps/kindnet configured
	I1128 03:15:21.473737  356731 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.025479618s)
	I1128 03:15:21.473768  356731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 03:15:21.473921  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:21.473935  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.473950  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.473965  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.481949  356731 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1128 03:15:21.481996  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.482004  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.482013  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.482022  356731 round_trippers.go:580]     Audit-Id: 19068ef0-997a-4d32-b3cb-5968d07fb69d
	I1128 03:15:21.482031  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.482040  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.482052  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.486190  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83167 chars]
	I1128 03:15:21.490199  356731 system_pods.go:59] 12 kube-system pods found
	I1128 03:15:21.490242  356731 system_pods.go:61] "coredns-5dd5756b68-sd64m" [0d5cae9f-6647-42f9-a8e7-1f14dc9fa422] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 03:15:21.490254  356731 system_pods.go:61] "etcd-multinode-112998" [d09c5f66-0756-4402-ae0e-3b10c34e059c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 03:15:21.490264  356731 system_pods.go:61] "kindnet-587m7" [1f3794af-43a9-411f-8c8c-edf00787e1dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:21.490271  356731 system_pods.go:61] "kindnet-5pfcd" [370f4bc7-f3dd-456e-b67a-fff569e42ac1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:21.490280  356731 system_pods.go:61] "kindnet-v2g52" [3d07ef2d-2b7b-4766-872e-6a1d8d2ec219] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:21.490289  356731 system_pods.go:61] "kube-apiserver-multinode-112998" [2191c8f0-3de1-4415-9bc9-b5dc50008609] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 03:15:21.490305  356731 system_pods.go:61] "kube-controller-manager-multinode-112998" [9c108920-a3e5-4377-96a3-97a4538555a0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 03:15:21.490319  356731 system_pods.go:61] "kube-proxy-bm5x4" [c478a3ff-3c8e-4f10-88c1-2b6f62b1699d] Running
	I1128 03:15:21.490334  356731 system_pods.go:61] "kube-proxy-bmr6b" [0d9b86f2-025d-424d-a66f-ad3255685aca] Running
	I1128 03:15:21.490340  356731 system_pods.go:61] "kube-proxy-jgxjs" [d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d] Running
	I1128 03:15:21.490354  356731 system_pods.go:61] "kube-scheduler-multinode-112998" [b32dbcd4-76a8-4b87-b7d8-701f78a8285f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 03:15:21.490364  356731 system_pods.go:61] "storage-provisioner" [80d85aa0-5ee8-48db-a570-fdde6138e079] Running
	I1128 03:15:21.490375  356731 system_pods.go:74] duration metric: took 16.594997ms to wait for pod list to return data ...
	I1128 03:15:21.490414  356731 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:15:21.490499  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:15:21.490508  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.490520  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.490532  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.493485  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:21.493509  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.493519  356731 round_trippers.go:580]     Audit-Id: 3f5047ff-c613-4c24-9ec8-9b5a6f60f7d6
	I1128 03:15:21.493526  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.493533  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.493542  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.493550  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.493569  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.493981  356731 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"861"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15370 chars]
	I1128 03:15:21.495126  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:21.495163  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:21.495211  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:21.495220  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:21.495229  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:21.495237  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:21.495245  356731 node_conditions.go:105] duration metric: took 4.8231ms to run NodePressure ...
	I1128 03:15:21.495279  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 03:15:21.775934  356731 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1128 03:15:21.775960  356731 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1128 03:15:21.775979  356731 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 03:15:21.776127  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I1128 03:15:21.776140  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.776148  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.776154  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.779521  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:21.779545  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.779557  356731 round_trippers.go:580]     Audit-Id: 51a4c5a7-e2e2-4ea7-a35d-f95113586ae7
	I1128 03:15:21.779566  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.779577  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.779588  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.779596  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.779605  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.780034  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"809","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations
":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:ku [truncated 28859 chars]
	I1128 03:15:21.781042  356731 kubeadm.go:787] kubelet initialised
	I1128 03:15:21.781063  356731 kubeadm.go:788] duration metric: took 5.073753ms waiting for restarted kubelet to initialise ...
	I1128 03:15:21.781073  356731 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:15:21.781144  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:21.781154  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.781165  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.781175  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.788414  356731 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1128 03:15:21.788438  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.788448  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.788456  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.788464  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.788472  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.788480  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.788488  356731 round_trippers.go:580]     Audit-Id: 4a163a74-4d35-478c-af66-bf512f2c5ab0
	I1128 03:15:21.791790  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"863"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83167 chars]
	I1128 03:15:21.794405  356731 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:21.794527  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:21.794539  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.794550  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.794559  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.796751  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:21.796771  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.796780  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.796789  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.796797  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.796805  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.796816  356731 round_trippers.go:580]     Audit-Id: 2e8e5078-6f9c-4be8-872d-763c368b8a86
	I1128 03:15:21.796826  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.797486  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:21.797917  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:21.797930  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.797937  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.797947  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.806343  356731 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I1128 03:15:21.806367  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.806377  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.806385  356731 round_trippers.go:580]     Audit-Id: 11abe398-22fc-4a5e-9959-ca008632b4d2
	I1128 03:15:21.806393  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.806406  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.806417  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.806426  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.806555  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:21.806955  356731 pod_ready.go:97] node "multinode-112998" hosting pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.806983  356731 pod_ready.go:81] duration metric: took 12.552745ms waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:21.806995  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.807005  356731 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:21.807071  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:15:21.807083  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.807094  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.807104  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.810085  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:21.810109  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.810119  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.810129  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.810145  356731 round_trippers.go:580]     Audit-Id: 07469865-09fe-4d63-8c74-b67a4d1b6c53
	I1128 03:15:21.810153  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.810165  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.810173  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.810344  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"809","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6067 chars]
	I1128 03:15:21.810836  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:21.810858  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.810869  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.810879  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.812750  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:21.812765  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.812774  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.812782  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.812791  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.812810  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.812825  356731 round_trippers.go:580]     Audit-Id: c082c9d3-7d42-4933-a931-6cec7836dc01
	I1128 03:15:21.812833  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.813030  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:21.813443  356731 pod_ready.go:97] node "multinode-112998" hosting pod "etcd-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.813477  356731 pod_ready.go:81] duration metric: took 6.463412ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:21.813488  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "etcd-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.813516  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:21.813587  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:21.813598  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.813608  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.813618  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.815462  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:21.815476  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.815485  356731 round_trippers.go:580]     Audit-Id: 75b6e158-9963-40e7-b889-1ccb25f2b654
	I1128 03:15:21.815493  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.815508  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.815520  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.815535  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.815548  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.815795  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:21.816329  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:21.816350  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.816360  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.816371  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.818124  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:21.818138  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.818147  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.818155  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.818163  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.818175  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.818184  356731 round_trippers.go:580]     Audit-Id: fd996ad8-150c-43dd-b674-ba3dd005a3dc
	I1128 03:15:21.818194  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.818404  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:21.818702  356731 pod_ready.go:97] node "multinode-112998" hosting pod "kube-apiserver-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.818719  356731 pod_ready.go:81] duration metric: took 5.192344ms waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:21.818731  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "kube-apiserver-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.818739  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:21.818809  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:15:21.818821  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.818831  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.818844  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.820656  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:21.820675  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.820684  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.820693  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.820708  356731 round_trippers.go:580]     Audit-Id: fa3a038d-89d0-44b4-acc6-faacc61e6e46
	I1128 03:15:21.820716  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.820725  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.820736  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.820965  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"821","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7212 chars]
	I1128 03:15:21.874695  356731 request.go:629] Waited for 53.230076ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:21.874785  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:21.874798  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:21.874814  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:21.874823  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:21.877767  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:21.877789  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:21.877796  356731 round_trippers.go:580]     Audit-Id: 1e6b6c14-6728-4a2d-adb5-cf6201138df1
	I1128 03:15:21.877802  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:21.877807  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:21.877812  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:21.877820  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:21.877825  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:21 GMT
	I1128 03:15:21.877991  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:21.878454  356731 pod_ready.go:97] node "multinode-112998" hosting pod "kube-controller-manager-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.878488  356731 pod_ready.go:81] duration metric: took 59.735442ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:21.878505  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "kube-controller-manager-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:21.878521  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:22.074993  356731 request.go:629] Waited for 196.371327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:15:22.075091  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:15:22.075103  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:22.075119  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:22.075132  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:22.077946  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:22.077972  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:22.077981  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:22.077990  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:22.078003  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:22.078011  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:22 GMT
	I1128 03:15:22.078022  356731 round_trippers.go:580]     Audit-Id: d0db25bd-fa7f-45ce-9aa5-0a7ec1016566
	I1128 03:15:22.078029  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:22.078174  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"730","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 03:15:22.273940  356731 request.go:629] Waited for 195.320397ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:15:22.274063  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:15:22.274074  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:22.274086  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:22.274097  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:22.276633  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:22.276664  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:22.276674  356731 round_trippers.go:580]     Audit-Id: 378505d6-28ef-476b-addd-f57bbc4d13c3
	I1128 03:15:22.276683  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:22.276691  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:22.276700  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:22.276712  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:22.276729  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:22 GMT
	I1128 03:15:22.276926  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"471d28bb-efb4-436f-9b13-4d96112b9f87","resourceVersion":"757","creationTimestamp":"2023-11-28T03:07:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:07:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3533 chars]
	I1128 03:15:22.277218  356731 pod_ready.go:92] pod "kube-proxy-bm5x4" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:22.277240  356731 pod_ready.go:81] duration metric: took 398.709586ms waiting for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:22.277251  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:22.474729  356731 request.go:629] Waited for 197.411628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:15:22.474846  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:15:22.474860  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:22.474872  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:22.474883  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:22.477416  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:22.477442  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:22.477449  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:22.477455  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:22 GMT
	I1128 03:15:22.477460  356731 round_trippers.go:580]     Audit-Id: 34a9d184-4684-4afb-bc72-3b60f41e5b2d
	I1128 03:15:22.477465  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:22.477470  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:22.477476  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:22.477699  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"860","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:15:22.674669  356731 request.go:629] Waited for 196.363206ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:22.674742  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:22.674747  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:22.674755  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:22.674761  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:22.677678  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:22.677708  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:22.677718  356731 round_trippers.go:580]     Audit-Id: 04343820-9479-4350-8132-22cfd314ce84
	I1128 03:15:22.677727  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:22.677735  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:22.677743  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:22.677751  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:22.677768  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:22 GMT
	I1128 03:15:22.677932  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:22.678400  356731 pod_ready.go:97] node "multinode-112998" hosting pod "kube-proxy-bmr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:22.678425  356731 pod_ready.go:81] duration metric: took 401.166385ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:22.678434  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "kube-proxy-bmr6b" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:22.678445  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:22.874880  356731 request.go:629] Waited for 196.358717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:15:22.874951  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:15:22.874958  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:22.874968  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:22.874978  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:22.878427  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:22.878458  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:22.878468  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:22.878476  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:22.878483  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:22.878491  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:22.878499  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:22 GMT
	I1128 03:15:22.878507  356731 round_trippers.go:580]     Audit-Id: fcac190e-cfcd-4b3d-89bd-564fb74a9b7a
	I1128 03:15:22.878751  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgxjs","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d","resourceVersion":"521","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1128 03:15:23.074718  356731 request.go:629] Waited for 195.383493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:15:23.074800  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:15:23.074827  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.074840  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.074860  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.077333  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:23.077361  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.077371  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.077378  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.077386  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.077394  356731 round_trippers.go:580]     Audit-Id: d06d567f-5289-425a-9689-35d1f73e802f
	I1128 03:15:23.077408  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.077421  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.077551  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"753","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1128 03:15:23.077865  356731 pod_ready.go:92] pod "kube-proxy-jgxjs" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:23.077885  356731 pod_ready.go:81] duration metric: took 399.434417ms waiting for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:23.077899  356731 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:23.274356  356731 request.go:629] Waited for 196.374893ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:15:23.274440  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:15:23.274453  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.274483  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.274501  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.277277  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:23.277304  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.277315  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.277323  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.277331  356731 round_trippers.go:580]     Audit-Id: b3e4ff47-7676-4931-b127-d0796384e873
	I1128 03:15:23.277339  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.277347  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.277359  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.277496  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"812","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4924 chars]
	I1128 03:15:23.474380  356731 request.go:629] Waited for 196.358558ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.474448  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.474455  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.474466  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.474475  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.477516  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:23.477538  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.477547  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.477553  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.477558  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.477563  356731 round_trippers.go:580]     Audit-Id: 1f1de9bd-4a7b-46de-84d8-6f512a03b73b
	I1128 03:15:23.477568  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.477583  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.477732  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:23.478174  356731 pod_ready.go:97] node "multinode-112998" hosting pod "kube-scheduler-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:23.478199  356731 pod_ready.go:81] duration metric: took 400.286367ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	E1128 03:15:23.478208  356731 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-112998" hosting pod "kube-scheduler-multinode-112998" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-112998" has status "Ready":"False"
	I1128 03:15:23.478222  356731 pod_ready.go:38] duration metric: took 1.697138891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:15:23.478242  356731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 03:15:23.491296  356731 command_runner.go:130] > -16
	I1128 03:15:23.491368  356731 ops.go:34] apiserver oom_adj: -16
	I1128 03:15:23.491386  356731 kubeadm.go:640] restartCluster took 23.030201079s
	I1128 03:15:23.491393  356731 kubeadm.go:406] StartCluster complete in 23.075230623s
	I1128 03:15:23.491415  356731 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:15:23.491492  356731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:15:23.492473  356731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:15:23.492799  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 03:15:23.492896  356731 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 03:15:23.496004  356731 out.go:177] * Enabled addons: 
	I1128 03:15:23.493101  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:15:23.497523  356731 addons.go:502] enable addons completed in 4.627331ms: enabled=[]
	I1128 03:15:23.493201  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:15:23.497785  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
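Note: the rest.Config dump above shows the pieces the API client is built from: the server host, the per-profile client certificate and key, and the cluster CA. A minimal client-go sketch that produces an equivalent clientset from the kubeconfig rewritten a few lines earlier (package and function names are illustrative, not minikube's kapi.go code):

    package kubeutil

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // newClientset loads a kubeconfig file and builds a clientset from it; the
    // resulting rest.Config carries the host, client cert/key and CA paths of
    // the kind shown in the dump above.
    func newClientset(kubeconfigPath string) (*kubernetes.Clientset, *rest.Config, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    	if err != nil {
    		return nil, nil, err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return nil, nil, err
    	}
    	return cs, cfg, nil
    }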
	I1128 03:15:23.498196  356731 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:15:23.498211  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.498218  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.498225  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.501437  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:23.501471  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.501482  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.501490  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.501502  356731 round_trippers.go:580]     Content-Length: 291
	I1128 03:15:23.501508  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.501515  356731 round_trippers.go:580]     Audit-Id: d2607f22-f364-49c2-a71b-c26c51ae7777
	I1128 03:15:23.501520  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.501527  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.501590  356731 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"862","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 03:15:23.501773  356731 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-112998" context rescaled to 1 replicas
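Note: the GET against .../deployments/coredns/scale above reads the autoscaling/v1 Scale subresource rather than fetching the whole Deployment. A hedged client-go sketch of the same rescale-to-1 round trip (names are illustrative; assumes a clientset built as in the earlier sketch):

    package kubeutil

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS reads the current scale of the coredns Deployment in
    // kube-system via the Scale subresource and pins spec.replicas to 1 if it
    // differs, mirroring the request/response pair above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface) error {
    	deployments := cs.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if scale.Spec.Replicas == 1 {
    		return nil // already at the size the log reports
    	}
    	scale.Spec.Replicas = 1
    	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }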
	I1128 03:15:23.501816  356731 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 03:15:23.503653  356731 out.go:177] * Verifying Kubernetes components...
	I1128 03:15:23.505254  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:15:23.609242  356731 command_runner.go:130] > apiVersion: v1
	I1128 03:15:23.609267  356731 command_runner.go:130] > data:
	I1128 03:15:23.609271  356731 command_runner.go:130] >   Corefile: |
	I1128 03:15:23.609275  356731 command_runner.go:130] >     .:53 {
	I1128 03:15:23.609282  356731 command_runner.go:130] >         log
	I1128 03:15:23.609287  356731 command_runner.go:130] >         errors
	I1128 03:15:23.609291  356731 command_runner.go:130] >         health {
	I1128 03:15:23.609298  356731 command_runner.go:130] >            lameduck 5s
	I1128 03:15:23.609304  356731 command_runner.go:130] >         }
	I1128 03:15:23.609312  356731 command_runner.go:130] >         ready
	I1128 03:15:23.609322  356731 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1128 03:15:23.609332  356731 command_runner.go:130] >            pods insecure
	I1128 03:15:23.609343  356731 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1128 03:15:23.609349  356731 command_runner.go:130] >            ttl 30
	I1128 03:15:23.609353  356731 command_runner.go:130] >         }
	I1128 03:15:23.609360  356731 command_runner.go:130] >         prometheus :9153
	I1128 03:15:23.609364  356731 command_runner.go:130] >         hosts {
	I1128 03:15:23.609371  356731 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I1128 03:15:23.609375  356731 command_runner.go:130] >            fallthrough
	I1128 03:15:23.609384  356731 command_runner.go:130] >         }
	I1128 03:15:23.609392  356731 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1128 03:15:23.609404  356731 command_runner.go:130] >            max_concurrent 1000
	I1128 03:15:23.609417  356731 command_runner.go:130] >         }
	I1128 03:15:23.609427  356731 command_runner.go:130] >         cache 30
	I1128 03:15:23.609443  356731 command_runner.go:130] >         loop
	I1128 03:15:23.609450  356731 command_runner.go:130] >         reload
	I1128 03:15:23.609455  356731 command_runner.go:130] >         loadbalance
	I1128 03:15:23.609458  356731 command_runner.go:130] >     }
	I1128 03:15:23.609462  356731 command_runner.go:130] > kind: ConfigMap
	I1128 03:15:23.609470  356731 command_runner.go:130] > metadata:
	I1128 03:15:23.609478  356731 command_runner.go:130] >   creationTimestamp: "2023-11-28T03:04:44Z"
	I1128 03:15:23.609485  356731 command_runner.go:130] >   name: coredns
	I1128 03:15:23.609493  356731 command_runner.go:130] >   namespace: kube-system
	I1128 03:15:23.609503  356731 command_runner.go:130] >   resourceVersion: "399"
	I1128 03:15:23.609511  356731 command_runner.go:130] >   uid: 495740b6-25c3-48ab-96b3-4d2ad854ec0c
	I1128 03:15:23.609626  356731 node_ready.go:35] waiting up to 6m0s for node "multinode-112998" to be "Ready" ...
	I1128 03:15:23.609670  356731 start.go:899] CoreDNS already contains "host.minikube.internal" host record, skipping...
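Note: start.go:899 above skips the Corefile patch because the hosts block already contains the 192.168.39.1 host.minikube.internal record visible in the ConfigMap dump. A minimal sketch of that check against the coredns ConfigMap (not minikube's actual implementation; names are illustrative):

    package kubeutil

    import (
    	"context"
    	"strings"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // hasMinikubeHostRecord fetches the coredns ConfigMap from kube-system and
    // reports whether its Corefile already carries the host.minikube.internal
    // hosts entry, in which case the patch can be skipped.
    func hasMinikubeHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
    	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }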
	I1128 03:15:23.674971  356731 request.go:629] Waited for 65.224645ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.675044  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.675051  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.675059  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.675065  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.678296  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:23.678324  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.678335  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.678344  356731 round_trippers.go:580]     Audit-Id: 5ad803d1-8da3-4961-9315-51bc859b9de4
	I1128 03:15:23.678352  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.678359  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.678367  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.678375  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.678833  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:23.874766  356731 request.go:629] Waited for 195.431296ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.874836  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:23.874842  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:23.874853  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:23.874906  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:23.877691  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:23.877719  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:23.877729  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:23.877736  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:23 GMT
	I1128 03:15:23.877744  356731 round_trippers.go:580]     Audit-Id: 5c0e47cf-26bb-4273-9cb6-5e3699b6e68c
	I1128 03:15:23.877751  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:23.877759  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:23.877766  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:23.878468  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:24.379724  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:24.379752  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:24.379764  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:24.379771  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:24.382653  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:24.382675  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:24.382682  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:24 GMT
	I1128 03:15:24.382699  356731 round_trippers.go:580]     Audit-Id: efeb1c62-8d17-4865-b65d-0d43b6755981
	I1128 03:15:24.382708  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:24.382719  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:24.382730  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:24.382738  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:24.383458  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"760","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6117 chars]
	I1128 03:15:24.879741  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:24.879777  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:24.879788  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:24.879796  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:24.882558  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:24.882587  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:24.882598  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:24.882608  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:24.882615  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:24.882623  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:24.882629  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:24 GMT
	I1128 03:15:24.882636  356731 round_trippers.go:580]     Audit-Id: 2f1eee54-ac76-4a8d-89af-2c9c7a4ab40c
	I1128 03:15:24.883089  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:24.883533  356731 node_ready.go:49] node "multinode-112998" has status "Ready":"True"
	I1128 03:15:24.883556  356731 node_ready.go:38] duration metric: took 1.273896501s waiting for node "multinode-112998" to be "Ready" ...
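Note: the node_ready.go wait above is satisfied once the Node object's Ready condition reports status True (the change is what bumps the node to resourceVersion 872 in the response). A sketch of that condition check with client-go (illustrative names; assumes the clientset from the earlier sketch):

    package kubeutil

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // nodeIsReady fetches a node and reports whether its Ready condition is
    // True, which is the state the polling loop above is waiting for.
    func nodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }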
	I1128 03:15:24.883568  356731 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:15:24.883643  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:24.883655  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:24.883665  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:24.883677  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:24.887162  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:24.887185  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:24.887194  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:24.887202  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:24.887212  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:24.887224  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:24 GMT
	I1128 03:15:24.887247  356731 round_trippers.go:580]     Audit-Id: 4b1e7a39-cf25-42f3-9a1c-d5a16ef82f6f
	I1128 03:15:24.887263  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:24.889535  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"872"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82917 chars]
	I1128 03:15:24.892106  356731 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:24.892193  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:24.892204  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:24.892215  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:24.892226  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:24.894561  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:24.894581  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:24.894595  356731 round_trippers.go:580]     Audit-Id: bdef16ea-e089-4c28-9b8b-602595f82c52
	I1128 03:15:24.894604  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:24.894611  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:24.894623  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:24.894637  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:24.894644  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:24 GMT
	I1128 03:15:24.894822  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:24.895355  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:24.895371  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:24.895381  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:24.895391  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:24.897401  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:24.897418  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:24.897432  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:24.897442  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:24.897450  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:24.897459  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:24.897469  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:24 GMT
	I1128 03:15:24.897479  356731 round_trippers.go:580]     Audit-Id: 83590c3b-fce4-472d-9663-75f963ef08ac
	I1128 03:15:24.897836  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:25.074582  356731 request.go:629] Waited for 176.387857ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:25.074687  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:25.074695  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:25.074705  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:25.074714  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:25.078554  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:25.078585  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:25.078597  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:25.078605  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:25.078613  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:25.078621  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:25.078632  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:25 GMT
	I1128 03:15:25.078642  356731 round_trippers.go:580]     Audit-Id: d6429f76-49ac-43cc-87a8-8a6b0ea34b2b
	I1128 03:15:25.079054  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:25.274007  356731 request.go:629] Waited for 194.327043ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:25.274107  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:25.274114  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:25.274125  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:25.274135  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:25.276807  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:25.276834  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:25.276845  356731 round_trippers.go:580]     Audit-Id: edce147c-39ee-4861-8a6c-66fcd8ef98c5
	I1128 03:15:25.276870  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:25.276894  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:25.276903  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:25.276916  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:25.276927  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:25 GMT
	I1128 03:15:25.277168  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:25.778350  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:25.778387  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:25.778400  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:25.778410  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:25.781250  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:25.781279  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:25.781290  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:25.781319  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:25.781327  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:25.781335  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:25.781348  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:25 GMT
	I1128 03:15:25.781356  356731 round_trippers.go:580]     Audit-Id: 9768a0b8-9dbb-4b95-8187-1562b3c0450a
	I1128 03:15:25.781909  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:25.782482  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:25.782506  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:25.782521  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:25.782532  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:25.784830  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:25.784849  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:25.784858  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:25.784867  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:25.784875  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:25.784894  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:25.784902  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:25 GMT
	I1128 03:15:25.784911  356731 round_trippers.go:580]     Audit-Id: 2d19fbbe-e397-4909-b7de-838641e42407
	I1128 03:15:25.785112  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:26.277762  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:26.277789  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:26.277798  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:26.277804  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:26.281166  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:26.281189  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:26.281196  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:26.281211  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:26 GMT
	I1128 03:15:26.281217  356731 round_trippers.go:580]     Audit-Id: 8db32504-9f1f-4b1c-96d1-77028a7470d7
	I1128 03:15:26.281222  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:26.281227  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:26.281232  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:26.282522  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:26.283001  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:26.283016  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:26.283023  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:26.283029  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:26.287433  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:26.287448  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:26.287455  356731 round_trippers.go:580]     Audit-Id: 3e8248cc-c165-42c8-85fb-2dd5acaa8519
	I1128 03:15:26.287460  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:26.287466  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:26.287471  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:26.287478  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:26.287486  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:26 GMT
	I1128 03:15:26.287894  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:26.777876  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:26.777902  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:26.777911  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:26.777917  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:26.781154  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:26.781181  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:26.781188  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:26.781193  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:26 GMT
	I1128 03:15:26.781199  356731 round_trippers.go:580]     Audit-Id: c63c82d2-fb95-405a-b173-4f262e010db1
	I1128 03:15:26.781204  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:26.781209  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:26.781217  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:26.781424  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:26.782067  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:26.782085  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:26.782096  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:26.782102  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:26.784730  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:26.784745  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:26.784751  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:26.784756  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:26.784761  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:26.784766  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:26.784771  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:26 GMT
	I1128 03:15:26.784776  356731 round_trippers.go:580]     Audit-Id: 144cd338-d78d-4bb7-a84d-395dda54e982
	I1128 03:15:26.785250  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:27.277999  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:27.278044  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:27.278058  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:27.278068  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:27.281173  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:27.281195  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:27.281202  356731 round_trippers.go:580]     Audit-Id: 231a0140-9f30-4b04-9898-181ff5b99279
	I1128 03:15:27.281228  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:27.281241  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:27.281249  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:27.281260  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:27.281269  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:27 GMT
	I1128 03:15:27.281539  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:27.282189  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:27.282210  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:27.282221  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:27.282231  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:27.285617  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:27.285632  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:27.285640  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:27 GMT
	I1128 03:15:27.285648  356731 round_trippers.go:580]     Audit-Id: 5585b0e9-a8cb-4df8-8e87-29c5cf009831
	I1128 03:15:27.285657  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:27.285668  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:27.285691  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:27.285702  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:27.286264  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:27.286553  356731 pod_ready.go:102] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"False"
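Note: pod_ready.go:102 above is reporting the pod's PodReady condition, which stays False until the coredns container passes its readiness probe; the loop that follows simply repeats the same pod and node GETs until the condition flips. A sketch of the underlying check (illustrative names; assumes the clientset from the earlier sketch):

    package kubeutil

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // podReady fetches a pod and reports whether its PodReady condition is
    // True, i.e. the state that ends the "waiting for pod ... to be Ready" loop.
    func podReady(ctx context.Context, cs kubernetes.Interface, namespace, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }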
	I1128 03:15:27.777888  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:27.777910  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:27.777918  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:27.777924  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:27.782916  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:27.782939  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:27.782946  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:27.782952  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:27.782957  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:27 GMT
	I1128 03:15:27.782964  356731 round_trippers.go:580]     Audit-Id: b12600df-c662-47a0-9497-1a47f52d6dbb
	I1128 03:15:27.782973  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:27.782980  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:27.783246  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"802","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6369 chars]
	I1128 03:15:27.783835  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:27.783854  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:27.783865  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:27.783874  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:27.786058  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:27.786074  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:27.786080  356731 round_trippers.go:580]     Audit-Id: ce877666-cedc-43dd-986b-57835a663be6
	I1128 03:15:27.786085  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:27.786100  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:27.786105  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:27.786111  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:27.786116  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:27 GMT
	I1128 03:15:27.786290  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:28.277949  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:28.277986  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.277999  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.278009  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.283836  356731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:15:28.283866  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.283876  356731 round_trippers.go:580]     Audit-Id: af206ca1-6985-433c-9284-67b2edda4935
	I1128 03:15:28.283885  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.283894  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.283904  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.283912  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.283926  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.284357  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"880","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6493 chars]
	I1128 03:15:28.285009  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:28.285034  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.285044  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.285053  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.288324  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:28.288348  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.288358  356731 round_trippers.go:580]     Audit-Id: bd82acc7-e6ac-48c8-bad7-de831ae142de
	I1128 03:15:28.288366  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.288374  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.288383  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.288394  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.288402  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.288535  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:28.778077  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:15:28.778106  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.778118  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.778127  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.780792  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:28.780819  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.780830  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.780838  356731 round_trippers.go:580]     Audit-Id: f78dbe7d-55f6-486f-b5fa-65ff6879fef9
	I1128 03:15:28.780847  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.780870  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.780891  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.780901  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.781256  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1128 03:15:28.781737  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:28.781753  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.781763  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.781771  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.784586  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:28.784608  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.784618  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.784626  356731 round_trippers.go:580]     Audit-Id: c0efaf0d-7d3e-454a-a087-c1e415928f28
	I1128 03:15:28.784635  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.784651  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.784659  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.784668  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.784814  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:28.785191  356731 pod_ready.go:92] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:28.785214  356731 pod_ready.go:81] duration metric: took 3.893085094s waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:28.785224  356731 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:28.785291  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:15:28.785301  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.785308  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.785314  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.787319  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:28.787342  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.787363  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.787377  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.787382  356731 round_trippers.go:580]     Audit-Id: c86cbc02-e3c2-48c1-bf89-fa30ac9ed064
	I1128 03:15:28.787388  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.787394  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.787399  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.787549  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"874","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1128 03:15:28.787906  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:28.787921  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.787931  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.787939  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.790650  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:28.790665  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.790671  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.790677  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.790682  356731 round_trippers.go:580]     Audit-Id: 374d2852-c20d-488c-99dd-44867e03a1ff
	I1128 03:15:28.790687  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.790695  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.790703  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.790786  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:28.791071  356731 pod_ready.go:92] pod "etcd-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:28.791086  356731 pod_ready.go:81] duration metric: took 5.853435ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:28.791104  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:28.791163  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:28.791171  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.791178  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.791186  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.793240  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:28.793257  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.793266  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.793274  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.793281  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.793288  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.793297  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.793306  356731 round_trippers.go:580]     Audit-Id: 3c91f312-f33f-48ae-984a-93e03e294444
	I1128 03:15:28.793589  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:28.874282  356731 request.go:629] Waited for 80.253859ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:28.874376  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:28.874384  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:28.874397  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:28.874412  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:28.877218  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:28.877239  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:28.877246  356731 round_trippers.go:580]     Audit-Id: 188bd58c-769f-4877-9d22-db7e23672833
	I1128 03:15:28.877251  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:28.877256  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:28.877261  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:28.877269  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:28.877278  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:28 GMT
	I1128 03:15:28.878078  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:29.074953  356731 request.go:629] Waited for 196.399833ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:29.075050  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:29.075062  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:29.075075  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:29.075096  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:29.077777  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:29.077794  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:29.077803  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:29.077812  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:29.077820  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:29.077833  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:29.077845  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:29 GMT
	I1128 03:15:29.077858  356731 round_trippers.go:580]     Audit-Id: 0ef891e5-8e30-4c35-860b-1e17756b7cf4
	I1128 03:15:29.078106  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:29.275036  356731 request.go:629] Waited for 196.453075ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:29.275142  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:29.275155  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:29.275166  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:29.275178  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:29.279595  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:29.279622  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:29.279632  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:29.279639  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:29.279646  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:29.279653  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:29 GMT
	I1128 03:15:29.279662  356731 round_trippers.go:580]     Audit-Id: d503714a-4138-4ee5-8b50-a373dce3e4e5
	I1128 03:15:29.279672  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:29.279805  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:29.780995  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:29.781019  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:29.781030  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:29.781039  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:29.785047  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:29.785071  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:29.785081  356731 round_trippers.go:580]     Audit-Id: 2712633b-3b3f-40c8-8223-71293bceb919
	I1128 03:15:29.785091  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:29.785099  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:29.785108  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:29.785118  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:29.785123  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:29 GMT
	I1128 03:15:29.785374  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:29.785820  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:29.785836  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:29.785847  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:29.785855  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:29.788067  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:29.788096  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:29.788103  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:29.788108  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:29 GMT
	I1128 03:15:29.788113  356731 round_trippers.go:580]     Audit-Id: 15b1ef56-a2bb-49dd-8971-ca04bd70f2e6
	I1128 03:15:29.788118  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:29.788126  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:29.788131  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:29.788294  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:30.281008  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:30.281035  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:30.281044  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:30.281050  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:30.283695  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:30.283718  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:30.283728  356731 round_trippers.go:580]     Audit-Id: 962d5cd2-9a0d-4fea-a5db-02395b312dc2
	I1128 03:15:30.283736  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:30.283743  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:30.283763  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:30.283786  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:30.283799  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:30 GMT
	I1128 03:15:30.284559  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:30.285022  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:30.285038  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:30.285048  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:30.285063  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:30.288382  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:30.288399  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:30.288406  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:30.288411  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:30 GMT
	I1128 03:15:30.288416  356731 round_trippers.go:580]     Audit-Id: bced63f2-5cf2-47f5-abfe-b1194e2608ec
	I1128 03:15:30.288421  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:30.288426  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:30.288439  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:30.288566  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:30.780347  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:30.780371  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:30.780383  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:30.780390  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:30.783189  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:30.783208  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:30.783215  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:30.783221  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:30.783226  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:30 GMT
	I1128 03:15:30.783231  356731 round_trippers.go:580]     Audit-Id: 83f65518-cc21-47ac-afdd-1033fa4ff860
	I1128 03:15:30.783236  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:30.783243  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:30.783555  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:30.784175  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:30.784195  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:30.784206  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:30.784216  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:30.786830  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:30.786845  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:30.786851  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:30.786868  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:30.786883  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:30.786891  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:30 GMT
	I1128 03:15:30.786899  356731 round_trippers.go:580]     Audit-Id: 76221d13-ad9d-4f9e-a263-afae99926a7e
	I1128 03:15:30.786907  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:30.787942  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:31.280613  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:31.280644  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:31.280653  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:31.280659  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:31.284661  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:31.284686  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:31.284692  356731 round_trippers.go:580]     Audit-Id: eef4b439-665c-462a-9460-cb644b9153d9
	I1128 03:15:31.284698  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:31.284703  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:31.284708  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:31.284713  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:31.284718  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:31 GMT
	I1128 03:15:31.285468  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:31.285895  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:31.285906  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:31.285914  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:31.285920  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:31.288859  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:31.288875  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:31.288898  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:31.288915  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:31.288926  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:31.288934  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:31 GMT
	I1128 03:15:31.288946  356731 round_trippers.go:580]     Audit-Id: 95c35141-763e-4375-a2a4-bc5a272d3823
	I1128 03:15:31.288955  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:31.289620  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:31.289908  356731 pod_ready.go:102] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"False"
	I1128 03:15:31.781136  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:31.781158  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:31.781166  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:31.781173  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:31.786414  356731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:15:31.786442  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:31.786452  356731 round_trippers.go:580]     Audit-Id: ae6a7599-1889-4a06-acbe-23d82689295f
	I1128 03:15:31.786465  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:31.786475  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:31.786484  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:31.786491  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:31.786512  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:31 GMT
	I1128 03:15:31.786721  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:31.787146  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:31.787158  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:31.787165  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:31.787171  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:31.789295  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:31.789315  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:31.789323  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:31.789331  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:31.789347  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:31 GMT
	I1128 03:15:31.789355  356731 round_trippers.go:580]     Audit-Id: ec7375bc-24f2-4bc2-a9a6-e5c7edb3f835
	I1128 03:15:31.789366  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:31.789377  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:31.790006  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:32.280665  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:32.280695  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:32.280704  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:32.280710  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:32.283690  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:32.283714  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:32.283721  356731 round_trippers.go:580]     Audit-Id: f7a87ee7-f102-4cdd-a174-c470ba0faacb
	I1128 03:15:32.283727  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:32.283732  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:32.283741  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:32.283750  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:32.283755  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:32 GMT
	I1128 03:15:32.283948  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:32.284398  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:32.284411  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:32.284426  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:32.284432  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:32.286558  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:32.286580  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:32.286590  356731 round_trippers.go:580]     Audit-Id: 893150e4-50e2-4af1-bf5d-f61f4b3e3603
	I1128 03:15:32.286598  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:32.286611  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:32.286620  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:32.286636  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:32.286648  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:32 GMT
	I1128 03:15:32.286872  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:32.780369  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:32.780410  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:32.780419  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:32.780424  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:32.783388  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:32.783415  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:32.783426  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:32.783434  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:32.783442  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:32 GMT
	I1128 03:15:32.783449  356731 round_trippers.go:580]     Audit-Id: bbd5d49b-1c27-4b40-87bd-d312f28247a0
	I1128 03:15:32.783455  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:32.783462  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:32.784090  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:32.784536  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:32.784554  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:32.784564  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:32.784582  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:32.786975  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:32.786989  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:32.786999  356731 round_trippers.go:580]     Audit-Id: c782422a-4981-4233-a808-22dd2fc49fa5
	I1128 03:15:32.787008  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:32.787035  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:32.787048  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:32.787058  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:32.787068  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:32 GMT
	I1128 03:15:32.787263  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:33.280992  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:33.281023  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:33.281037  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:33.281045  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:33.285654  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:33.285684  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:33.285694  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:33.285700  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:33 GMT
	I1128 03:15:33.285705  356731 round_trippers.go:580]     Audit-Id: f1f234fe-7a7c-4c11-90f1-20ed18c7fa2e
	I1128 03:15:33.285712  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:33.285721  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:33.285728  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:33.286569  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:33.287030  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:33.287044  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:33.287051  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:33.287057  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:33.289501  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:33.289525  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:33.289534  356731 round_trippers.go:580]     Audit-Id: 6bfd740a-5a0a-4588-8995-e6623e272688
	I1128 03:15:33.289542  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:33.289549  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:33.289558  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:33.289567  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:33.289590  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:33 GMT
	I1128 03:15:33.290280  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:33.290595  356731 pod_ready.go:102] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"False"
	I1128 03:15:33.781096  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:33.781143  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:33.781152  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:33.781158  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:33.784625  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:33.784687  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:33.784716  356731 round_trippers.go:580]     Audit-Id: 336346b0-bdfb-4fa2-9752-22ebf672c7a1
	I1128 03:15:33.784726  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:33.784735  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:33.784744  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:33.784758  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:33.784767  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:33 GMT
	I1128 03:15:33.785269  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"816","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7624 chars]
	I1128 03:15:33.785720  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:33.785736  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:33.785746  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:33.785755  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:33.788339  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:33.788354  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:33.788360  356731 round_trippers.go:580]     Audit-Id: 785663f1-74da-4e32-8ab1-05a14d73b5ae
	I1128 03:15:33.788366  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:33.788370  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:33.788376  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:33.788380  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:33.788386  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:33 GMT
	I1128 03:15:33.788719  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:34.280457  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:15:34.280490  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.280501  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.280510  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.283579  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:34.283610  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.283621  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.283630  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.283637  356731 round_trippers.go:580]     Audit-Id: e168e25d-95c0-4698-b311-bd90f02afae8
	I1128 03:15:34.283651  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.283658  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.283667  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.284373  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"901","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1128 03:15:34.284985  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:34.285005  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.285015  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.285024  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.289554  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:34.289579  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.289590  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.289598  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.289610  356731 round_trippers.go:580]     Audit-Id: c34f7731-55e1-4d08-954b-3028d4be4caf
	I1128 03:15:34.289621  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.289633  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.289644  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.290113  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:34.290434  356731 pod_ready.go:92] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:34.290456  356731 pod_ready.go:81] duration metric: took 5.499340089s waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
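
The wait above is driven by a readiness check on the pod's status conditions: a pod counts as ready only once its PodReady condition reports True, which is what the log's has status "Ready":"True" lines reflect. A minimal sketch of that kind of check in Go (illustrative only, not minikube's actual pod_ready implementation; the isPodReady helper is hypothetical):

// Illustrative sketch of a PodReady condition check.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition on the pod status is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(isPodReady(pod)) // true
}
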
	I1128 03:15:34.290470  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.290530  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:15:34.290538  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.290545  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.290554  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.292708  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:34.292728  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.292737  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.292746  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.292754  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.292761  356731 round_trippers.go:580]     Audit-Id: c9f09876-6cf6-4977-8e71-799f82628c6d
	I1128 03:15:34.292766  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.292772  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.293190  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"883","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1128 03:15:34.293578  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:34.293591  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.293598  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.293606  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.295487  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:34.295509  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.295518  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.295527  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.295536  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.295549  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.295561  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.295573  356731 round_trippers.go:580]     Audit-Id: 877add76-94cd-4f86-be35-01d0e1017a6f
	I1128 03:15:34.295777  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:34.296123  356731 pod_ready.go:92] pod "kube-controller-manager-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:34.296141  356731 pod_ready.go:81] duration metric: took 5.664306ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.296152  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.296215  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:15:34.296231  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.296238  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.296256  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.298202  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:34.298220  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.298229  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.298238  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.298250  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.298267  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.298282  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.298294  356731 round_trippers.go:580]     Audit-Id: d87db3ed-a8b0-47ad-84b4-3f48b80f3152
	I1128 03:15:34.298519  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"730","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 03:15:34.298982  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:15:34.298999  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.299006  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.299012  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.300981  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:34.300997  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.301005  356731 round_trippers.go:580]     Audit-Id: 468054f1-b67a-4ef1-87a6-e61722f00d2f
	I1128 03:15:34.301016  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.301023  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.301032  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.301045  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.301055  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.301317  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"471d28bb-efb4-436f-9b13-4d96112b9f87","resourceVersion":"894","creationTimestamp":"2023-11-28T03:07:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:07:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1128 03:15:34.301564  356731 pod_ready.go:92] pod "kube-proxy-bm5x4" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:34.301581  356731 pod_ready.go:81] duration metric: took 5.41615ms waiting for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.301592  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.301653  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:15:34.301662  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.301673  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.301683  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.303424  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:34.303446  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.303455  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.303463  356731 round_trippers.go:580]     Audit-Id: 924ef92d-4e87-474f-a010-68f1d31600e3
	I1128 03:15:34.303479  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.303492  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.303505  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.303513  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.303631  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"860","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:15:34.474477  356731 request.go:629] Waited for 170.358424ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:34.474575  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:34.474584  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.474596  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.474613  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.477558  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:34.477592  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.477602  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.477609  356731 round_trippers.go:580]     Audit-Id: d0ad466c-414a-49d1-b7c0-12980f0747c9
	I1128 03:15:34.477616  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.477623  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.477629  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.477635  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.478220  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:34.478557  356731 pod_ready.go:92] pod "kube-proxy-bmr6b" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:34.478573  356731 pod_ready.go:81] duration metric: took 176.974195ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.478583  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:34.674954  356731 request.go:629] Waited for 196.308355ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:15:34.675060  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:15:34.675072  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.675085  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.675099  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.678062  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:34.678087  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.678097  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.678106  356731 round_trippers.go:580]     Audit-Id: c6381b19-9ceb-488d-87a4-d6eb6f14afda
	I1128 03:15:34.678112  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.678119  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.678126  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.678133  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.678636  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgxjs","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d","resourceVersion":"521","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I1128 03:15:34.874579  356731 request.go:629] Waited for 195.410474ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:15:34.874662  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:15:34.874668  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:34.874679  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:34.874688  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:34.877630  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:15:34.877660  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:34.877671  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:34.877685  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:34.877693  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:34 GMT
	I1128 03:15:34.877698  356731 round_trippers.go:580]     Audit-Id: c8a5e5fa-403c-4857-ab3f-97611918c07c
	I1128 03:15:34.877705  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:34.877710  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:34.877881  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c","resourceVersion":"753","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kube
rnetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:b [truncated 3683 chars]
	I1128 03:15:34.878242  356731 pod_ready.go:92] pod "kube-proxy-jgxjs" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:34.878263  356731 pod_ready.go:81] duration metric: took 399.67456ms waiting for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
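
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, which delays requests once they exceed the configured QPS and Burst on the REST config. A rough sketch of where those knobs live, assuming a standard kubeconfig load (the values shown are arbitrary, not minikube's settings):

// Illustrative sketch: client-go's client-side throttling is governed by
// the QPS/Burst fields on the rest.Config used to build the client.
package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// When requests are issued faster than QPS allows, client-go delays them
	// and logs "Waited for ... due to client-side throttling", as seen above.
	cfg.QPS = 5
	cfg.Burst = 10
	fmt.Printf("client rate limit: qps=%v burst=%v\n", cfg.QPS, cfg.Burst)
}
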
	I1128 03:15:34.878272  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:35.074748  356731 request.go:629] Waited for 196.372936ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:15:35.074822  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:15:35.074829  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.074842  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.074857  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.077941  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:35.077958  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.077964  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.077970  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.077975  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.077985  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.078000  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.078017  356731 round_trippers.go:580]     Audit-Id: 000bdcc3-765a-4ee8-ad4c-7ce021e15346
	I1128 03:15:35.078346  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"875","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1128 03:15:35.274047  356731 request.go:629] Waited for 195.291427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:35.274128  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:15:35.274135  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.274144  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.274167  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.277438  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:35.277457  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.277464  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.277475  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.277480  356731 round_trippers.go:580]     Audit-Id: ee73ec1a-820a-439a-88b2-4e7a6f4e9a5c
	I1128 03:15:35.277485  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.277498  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.277503  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.278288  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 5941 chars]
	I1128 03:15:35.278597  356731 pod_ready.go:92] pod "kube-scheduler-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:15:35.278612  356731 pod_ready.go:81] duration metric: took 400.328899ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:15:35.278623  356731 pod_ready.go:38] duration metric: took 10.395045241s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:15:35.278641  356731 api_server.go:52] waiting for apiserver process to appear ...
	I1128 03:15:35.278692  356731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:15:35.299822  356731 command_runner.go:130] > 1076
	I1128 03:15:35.299929  356731 api_server.go:72] duration metric: took 11.798070627s to wait for apiserver process to appear ...
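
The process check above is just pgrep run on the guest over SSH: a zero exit status (and a printed PID, 1076 here) means a matching kube-apiserver process exists. A local, illustrative equivalent in Go (not minikube's ssh_runner; the real command runs with sudo inside the VM):

// Illustrative sketch: detect a running apiserver process via pgrep's exit status.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// pgrep exits 0 and prints the PID when a matching process is found;
	// a non-zero exit status means no such process.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}
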
	I1128 03:15:35.299950  356731 api_server.go:88] waiting for apiserver healthz status ...
	I1128 03:15:35.299966  356731 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:15:35.305232  356731 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I1128 03:15:35.305296  356731 round_trippers.go:463] GET https://192.168.39.73:8443/version
	I1128 03:15:35.305304  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.305313  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.305319  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.306777  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:15:35.306795  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.306802  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.306807  356731 round_trippers.go:580]     Content-Length: 264
	I1128 03:15:35.306812  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.306818  356731 round_trippers.go:580]     Audit-Id: 884dd10a-b505-4592-93ac-437e3a762252
	I1128 03:15:35.306823  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.306830  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.306837  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.306860  356731 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1128 03:15:35.306906  356731 api_server.go:141] control plane version: v1.28.4
	I1128 03:15:35.306922  356731 api_server.go:131] duration metric: took 6.965195ms to wait for apiserver health ...
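
The health wait consists of two HTTPS probes: GET /healthz must return 200 with body "ok", then GET /version is decoded to report the control-plane version (v1.28.4 above). A simplified sketch of that pattern; real clients authenticate with client certificates and verify the server certificate, both of which are elided here:

// Illustrative sketch of the healthz and version probes (no TLS auth, sketch only).
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

func main() {
	base := "https://192.168.39.73:8443" // endpoint from the log; a real client uses certs

	resp, err := http.Get(base + "/healthz")
	if err != nil {
		panic(err)
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body) // expect: 200 ok

	resp, err = http.Get(base + "/version")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var v versionInfo
	if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // e.g. v1.28.4
}
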
	I1128 03:15:35.306930  356731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 03:15:35.474371  356731 request.go:629] Waited for 167.359415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:35.474447  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:35.474452  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.474460  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.474466  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.479328  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:35.479362  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.479372  356731 round_trippers.go:580]     Audit-Id: 85d70ee1-6da3-4a1b-aa69-efd21693f26f
	I1128 03:15:35.479379  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.479386  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.479393  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.479401  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.479409  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.480854  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1128 03:15:35.483420  356731 system_pods.go:59] 12 kube-system pods found
	I1128 03:15:35.483447  356731 system_pods.go:61] "coredns-5dd5756b68-sd64m" [0d5cae9f-6647-42f9-a8e7-1f14dc9fa422] Running
	I1128 03:15:35.483454  356731 system_pods.go:61] "etcd-multinode-112998" [d09c5f66-0756-4402-ae0e-3b10c34e059c] Running
	I1128 03:15:35.483463  356731 system_pods.go:61] "kindnet-587m7" [1f3794af-43a9-411f-8c8c-edf00787e1dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:35.483478  356731 system_pods.go:61] "kindnet-5pfcd" [370f4bc7-f3dd-456e-b67a-fff569e42ac1] Running
	I1128 03:15:35.483487  356731 system_pods.go:61] "kindnet-v2g52" [3d07ef2d-2b7b-4766-872e-6a1d8d2ec219] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:35.483494  356731 system_pods.go:61] "kube-apiserver-multinode-112998" [2191c8f0-3de1-4415-9bc9-b5dc50008609] Running
	I1128 03:15:35.483506  356731 system_pods.go:61] "kube-controller-manager-multinode-112998" [9c108920-a3e5-4377-96a3-97a4538555a0] Running
	I1128 03:15:35.483510  356731 system_pods.go:61] "kube-proxy-bm5x4" [c478a3ff-3c8e-4f10-88c1-2b6f62b1699d] Running
	I1128 03:15:35.483514  356731 system_pods.go:61] "kube-proxy-bmr6b" [0d9b86f2-025d-424d-a66f-ad3255685aca] Running
	I1128 03:15:35.483518  356731 system_pods.go:61] "kube-proxy-jgxjs" [d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d] Running
	I1128 03:15:35.483522  356731 system_pods.go:61] "kube-scheduler-multinode-112998" [b32dbcd4-76a8-4b87-b7d8-701f78a8285f] Running
	I1128 03:15:35.483526  356731 system_pods.go:61] "storage-provisioner" [80d85aa0-5ee8-48db-a570-fdde6138e079] Running
	I1128 03:15:35.483535  356731 system_pods.go:74] duration metric: took 176.595487ms to wait for pod list to return data ...
	I1128 03:15:35.483545  356731 default_sa.go:34] waiting for default service account to be created ...
	I1128 03:15:35.673932  356731 request.go:629] Waited for 190.307093ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/default/serviceaccounts
	I1128 03:15:35.674018  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/default/serviceaccounts
	I1128 03:15:35.674025  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.674033  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.674039  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.677215  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:35.677240  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.677247  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.677253  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.677260  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.677265  356731 round_trippers.go:580]     Content-Length: 261
	I1128 03:15:35.677271  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.677277  356731 round_trippers.go:580]     Audit-Id: ce3f3a52-a5b6-4092-b6d2-847987493e6f
	I1128 03:15:35.677284  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.677312  356731 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5c736962-0501-45a3-be1a-3066e5ff4f01","resourceVersion":"331","creationTimestamp":"2023-11-28T03:04:56Z"}}]}
	I1128 03:15:35.677535  356731 default_sa.go:45] found service account: "default"
	I1128 03:15:35.677555  356731 default_sa.go:55] duration metric: took 194.003944ms for default service account to be created ...
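
The default-service-account wait amounts to listing ServiceAccounts in the "default" namespace until one named "default" appears. A sketch with client-go, assuming a standard kubeconfig load rather than minikube's own client construction:

// Illustrative sketch: wait target is the "default" ServiceAccount in namespace "default".
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sas, err := client.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println("found service account:", sa.Name)
		}
	}
}
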
	I1128 03:15:35.677565  356731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 03:15:35.873997  356731 request.go:629] Waited for 196.349897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:35.874070  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:15:35.874075  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:35.874083  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:35.874092  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:35.878380  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:15:35.878410  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:35.878420  356731 round_trippers.go:580]     Audit-Id: 8aed65e0-dbdd-4b8d-b655-9a21247c68a7
	I1128 03:15:35.878429  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:35.878436  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:35.878445  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:35.878453  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:35.878463  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:35 GMT
	I1128 03:15:35.879535  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81838 chars]
	I1128 03:15:35.882131  356731 system_pods.go:86] 12 kube-system pods found
	I1128 03:15:35.882160  356731 system_pods.go:89] "coredns-5dd5756b68-sd64m" [0d5cae9f-6647-42f9-a8e7-1f14dc9fa422] Running
	I1128 03:15:35.882165  356731 system_pods.go:89] "etcd-multinode-112998" [d09c5f66-0756-4402-ae0e-3b10c34e059c] Running
	I1128 03:15:35.882171  356731 system_pods.go:89] "kindnet-587m7" [1f3794af-43a9-411f-8c8c-edf00787e1dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:35.882177  356731 system_pods.go:89] "kindnet-5pfcd" [370f4bc7-f3dd-456e-b67a-fff569e42ac1] Running
	I1128 03:15:35.882186  356731 system_pods.go:89] "kindnet-v2g52" [3d07ef2d-2b7b-4766-872e-6a1d8d2ec219] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1128 03:15:35.882193  356731 system_pods.go:89] "kube-apiserver-multinode-112998" [2191c8f0-3de1-4415-9bc9-b5dc50008609] Running
	I1128 03:15:35.882201  356731 system_pods.go:89] "kube-controller-manager-multinode-112998" [9c108920-a3e5-4377-96a3-97a4538555a0] Running
	I1128 03:15:35.882214  356731 system_pods.go:89] "kube-proxy-bm5x4" [c478a3ff-3c8e-4f10-88c1-2b6f62b1699d] Running
	I1128 03:15:35.882220  356731 system_pods.go:89] "kube-proxy-bmr6b" [0d9b86f2-025d-424d-a66f-ad3255685aca] Running
	I1128 03:15:35.882225  356731 system_pods.go:89] "kube-proxy-jgxjs" [d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d] Running
	I1128 03:15:35.882230  356731 system_pods.go:89] "kube-scheduler-multinode-112998" [b32dbcd4-76a8-4b87-b7d8-701f78a8285f] Running
	I1128 03:15:35.882233  356731 system_pods.go:89] "storage-provisioner" [80d85aa0-5ee8-48db-a570-fdde6138e079] Running
	I1128 03:15:35.882240  356731 system_pods.go:126] duration metric: took 204.662683ms to wait for k8s-apps to be running ...
	I1128 03:15:35.882249  356731 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 03:15:35.882297  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:15:35.896237  356731 system_svc.go:56] duration metric: took 13.980738ms WaitForService to wait for kubelet.
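
WaitForService here boils down to checking the kubelet systemd unit's state: "systemctl is-active --quiet <unit>" exits 0 only when the unit is active. An illustrative local equivalent (minikube runs the command with sudo on the guest via SSH):

// Illustrative sketch: verify a systemd unit is active by exit status.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
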
	I1128 03:15:35.896264  356731 kubeadm.go:581] duration metric: took 12.394412983s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 03:15:35.896282  356731 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:15:36.074820  356731 request.go:629] Waited for 178.378425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes
	I1128 03:15:36.074892  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:15:36.074900  356731 round_trippers.go:469] Request Headers:
	I1128 03:15:36.074911  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:15:36.074921  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:15:36.078141  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:15:36.078161  356731 round_trippers.go:577] Response Headers:
	I1128 03:15:36.078168  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:15:36.078174  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:15:36.078179  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:15:36 GMT
	I1128 03:15:36.078185  356731 round_trippers.go:580]     Audit-Id: 504cf1d9-00ad-4695-9dd3-5e56eecca1b6
	I1128 03:15:36.078192  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:15:36.078200  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:15:36.078449  356731 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"901"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"872","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15074 chars]
	I1128 03:15:36.079285  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:36.079319  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:36.079335  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:36.079348  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:36.079354  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:15:36.079369  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:15:36.079377  356731 node_conditions.go:105] duration metric: took 183.089594ms to run NodePressure ...
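
The NodePressure step reads each node's reported capacity (ephemeral storage and CPU in the lines above) from the NodeList. A sketch of pulling the same fields with client-go, again assuming a standard kubeconfig load:

// Illustrative sketch: read per-node ephemeral-storage and CPU capacity.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}
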
	I1128 03:15:36.079394  356731 start.go:228] waiting for startup goroutines ...
	I1128 03:15:36.079406  356731 start.go:233] waiting for cluster config update ...
	I1128 03:15:36.079420  356731 start.go:242] writing updated cluster config ...
	I1128 03:15:36.080030  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:15:36.080174  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:15:36.083368  356731 out.go:177] * Starting worker node multinode-112998-m02 in cluster multinode-112998
	I1128 03:15:36.084685  356731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:15:36.084711  356731 cache.go:56] Caching tarball of preloaded images
	I1128 03:15:36.084842  356731 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 03:15:36.084855  356731 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
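
The preload step is a cache check: if the preloaded image tarball for this Kubernetes version and container runtime already exists under the minikube cache directory, the download is skipped. A minimal sketch of that decision (the path mirrors the one in the log; minikube derives it from the version and runtime):

// Illustrative sketch: skip the preload download when the tarball is already cached.
package main

import (
	"fmt"
	"os"
)

func main() {
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4")
	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("preload found in cache, skipping download:", tarball)
	} else {
		fmt.Println("preload missing, would download:", tarball)
	}
}
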
	I1128 03:15:36.084973  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:15:36.085205  356731 start.go:365] acquiring machines lock for multinode-112998-m02: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:15:36.085276  356731 start.go:369] acquired machines lock for "multinode-112998-m02" in 38.227µs
	I1128 03:15:36.085298  356731 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:15:36.085308  356731 fix.go:54] fixHost starting: m02
	I1128 03:15:36.085649  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:15:36.085681  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:15:36.100251  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36995
	I1128 03:15:36.100737  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:15:36.101284  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:15:36.101306  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:15:36.101604  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:15:36.101800  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:15:36.101961  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetState
	I1128 03:15:36.103672  356731 fix.go:102] recreateIfNeeded on multinode-112998-m02: state=Running err=<nil>
	W1128 03:15:36.103692  356731 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:15:36.105751  356731 out.go:177] * Updating the running kvm2 "multinode-112998-m02" VM ...
	I1128 03:15:36.107026  356731 machine.go:88] provisioning docker machine ...
	I1128 03:15:36.107049  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:15:36.107233  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:15:36.107394  356731 buildroot.go:166] provisioning hostname "multinode-112998-m02"
	I1128 03:15:36.107417  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:15:36.107573  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:15:36.109757  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.110247  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.110280  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.110373  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:15:36.110536  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.110693  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.110795  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:15:36.110974  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:15:36.111300  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:15:36.111313  356731 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998-m02 && echo "multinode-112998-m02" | sudo tee /etc/hostname
	I1128 03:15:36.243679  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-112998-m02
	
	I1128 03:15:36.243714  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:15:36.246421  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.246876  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.246926  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.247154  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:15:36.247411  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.247645  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.247795  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:15:36.247935  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:15:36.248271  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:15:36.248290  356731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-112998-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-112998-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-112998-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:15:36.365850  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:15:36.365889  356731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:15:36.365924  356731 buildroot.go:174] setting up certificates
	I1128 03:15:36.365934  356731 provision.go:83] configureAuth start
	I1128 03:15:36.365946  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetMachineName
	I1128 03:15:36.366212  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:15:36.368981  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.369420  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.369461  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.369584  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:15:36.371758  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.372098  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.372134  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.372343  356731 provision.go:138] copyHostCerts
	I1128 03:15:36.372372  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:15:36.372399  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:15:36.372408  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:15:36.372472  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:15:36.372559  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:15:36.372581  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:15:36.372587  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:15:36.372614  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:15:36.372666  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:15:36.372681  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:15:36.372688  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:15:36.372708  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:15:36.372763  356731 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.multinode-112998-m02 san=[192.168.39.31 192.168.39.31 localhost 127.0.0.1 minikube multinode-112998-m02]
	I1128 03:15:36.592806  356731 provision.go:172] copyRemoteCerts
	I1128 03:15:36.592898  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:15:36.592934  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:15:36.596023  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.596335  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.596370  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.596562  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:15:36.596770  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.596960  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:15:36.597111  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:15:36.684259  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 03:15:36.684444  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:15:36.708634  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 03:15:36.708719  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 03:15:36.732144  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 03:15:36.732221  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 03:15:36.755730  356731 provision.go:86] duration metric: configureAuth took 389.779879ms
	I1128 03:15:36.755767  356731 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:15:36.755988  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:15:36.756076  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:15:36.758859  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.759266  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:15:36.759294  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:15:36.759503  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:15:36.759743  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.759912  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:15:36.760046  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:15:36.760227  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:15:36.760556  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:15:36.760580  356731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:17:07.222095  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:17:07.222142  356731 machine.go:91] provisioned docker machine in 1m31.115098378s
	I1128 03:17:07.222158  356731 start.go:300] post-start starting for "multinode-112998-m02" (driver="kvm2")
	I1128 03:17:07.222173  356731 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:17:07.222231  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:17:07.222661  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:17:07.222705  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:17:07.225676  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.226130  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:07.226167  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.226350  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:17:07.226550  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:17:07.226680  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:17:07.226839  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:17:07.315746  356731 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:17:07.319868  356731 command_runner.go:130] > NAME=Buildroot
	I1128 03:17:07.319897  356731 command_runner.go:130] > VERSION=2021.02.12-1-g21ec34a-dirty
	I1128 03:17:07.319904  356731 command_runner.go:130] > ID=buildroot
	I1128 03:17:07.319916  356731 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 03:17:07.319924  356731 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 03:17:07.319962  356731 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 03:17:07.319981  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:17:07.320077  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:17:07.320184  356731 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:17:07.320196  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 03:17:07.320309  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:17:07.331038  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:17:07.354837  356731 start.go:303] post-start completed in 132.664034ms
	I1128 03:17:07.354887  356731 fix.go:56] fixHost completed within 1m31.269580011s
	I1128 03:17:07.354912  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:17:07.357538  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.357928  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:07.357959  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.358091  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:17:07.358285  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:17:07.358453  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:17:07.358577  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:17:07.358729  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:17:07.359122  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.31 22 <nil> <nil>}
	I1128 03:17:07.359134  356731 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 03:17:07.473734  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701141427.464746925
	
	I1128 03:17:07.473777  356731 fix.go:206] guest clock: 1701141427.464746925
	I1128 03:17:07.473788  356731 fix.go:219] Guest: 2023-11-28 03:17:07.464746925 +0000 UTC Remote: 2023-11-28 03:17:07.35489215 +0000 UTC m=+450.979399939 (delta=109.854775ms)
	I1128 03:17:07.473812  356731 fix.go:190] guest clock delta is within tolerance: 109.854775ms
	I1128 03:17:07.473820  356731 start.go:83] releasing machines lock for "multinode-112998-m02", held for 1m31.388529616s
	I1128 03:17:07.473881  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:17:07.474185  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:17:07.476872  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.477237  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:07.477270  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.479545  356731 out.go:177] * Found network options:
	I1128 03:17:07.481140  356731 out.go:177]   - NO_PROXY=192.168.39.73
	W1128 03:17:07.482559  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:17:07.482585  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:17:07.483148  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:17:07.483349  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:17:07.483452  356731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:17:07.483497  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	W1128 03:17:07.483588  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:17:07.483665  356731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:17:07.483690  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:17:07.486031  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.486249  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.486424  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:07.486453  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.486608  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:17:07.486756  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:07.486784  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:17:07.486834  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:07.486944  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:17:07.486971  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:17:07.487131  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:17:07.487274  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:17:07.487502  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:17:07.487643  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:17:07.721306  356731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 03:17:07.721363  356731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 03:17:07.727078  356731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 03:17:07.727293  356731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:17:07.727365  356731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:17:07.735897  356731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 03:17:07.735917  356731 start.go:472] detecting cgroup driver to use...
	I1128 03:17:07.735989  356731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:17:07.750669  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:17:07.763010  356731 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:17:07.763057  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:17:07.775970  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:17:07.788438  356731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 03:17:07.925557  356731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:17:08.052566  356731 docker.go:219] disabling docker service ...
	I1128 03:17:08.052636  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:17:08.067164  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:17:08.079947  356731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:17:08.215584  356731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:17:08.350528  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:17:08.363530  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:17:08.380127  356731 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 03:17:08.380508  356731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 03:17:08.380581  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:17:08.389843  356731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 03:17:08.389897  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:17:08.399273  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:17:08.409303  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:17:08.418415  356731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 03:17:08.428006  356731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 03:17:08.436802  356731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 03:17:08.436933  356731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 03:17:08.445959  356731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 03:17:08.577588  356731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 03:17:15.873644  356731 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.296015348s)
	I1128 03:17:15.873676  356731 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 03:17:15.873725  356731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 03:17:15.878793  356731 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 03:17:15.878826  356731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 03:17:15.878836  356731 command_runner.go:130] > Device: 16h/22d	Inode: 1200        Links: 1
	I1128 03:17:15.878845  356731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:17:15.878851  356731 command_runner.go:130] > Access: 2023-11-28 03:17:15.800246311 +0000
	I1128 03:17:15.878859  356731 command_runner.go:130] > Modify: 2023-11-28 03:17:15.800246311 +0000
	I1128 03:17:15.878867  356731 command_runner.go:130] > Change: 2023-11-28 03:17:15.800246311 +0000
	I1128 03:17:15.878874  356731 command_runner.go:130] >  Birth: -
	I1128 03:17:15.878907  356731 start.go:540] Will wait 60s for crictl version
	I1128 03:17:15.878976  356731 ssh_runner.go:195] Run: which crictl
	I1128 03:17:15.883020  356731 command_runner.go:130] > /usr/bin/crictl
	I1128 03:17:15.883082  356731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 03:17:15.922450  356731 command_runner.go:130] > Version:  0.1.0
	I1128 03:17:15.922481  356731 command_runner.go:130] > RuntimeName:  cri-o
	I1128 03:17:15.922489  356731 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 03:17:15.922497  356731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 03:17:15.922520  356731 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 03:17:15.922598  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:17:15.971843  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:17:15.971871  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:17:15.971878  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:17:15.971884  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:17:15.971913  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:17:15.971922  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:17:15.971930  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:17:15.971937  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:17:15.971950  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:17:15.971958  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:17:15.971969  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:17:15.971974  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:17:15.973095  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:17:16.022327  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:17:16.022361  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:17:16.022371  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:17:16.022380  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:17:16.022389  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:17:16.022397  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:17:16.022405  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:17:16.022411  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:17:16.022419  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:17:16.022430  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:17:16.022442  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:17:16.022448  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:17:16.024623  356731 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 03:17:16.026155  356731 out.go:177]   - env NO_PROXY=192.168.39.73
	I1128 03:17:16.027685  356731 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:17:16.030280  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:16.030701  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:17:16.030728  356731 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:17:16.030974  356731 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 03:17:16.034971  356731 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1128 03:17:16.035150  356731 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998 for IP: 192.168.39.31
	I1128 03:17:16.035178  356731 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:17:16.035368  356731 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 03:17:16.035423  356731 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 03:17:16.035441  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 03:17:16.035462  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 03:17:16.035488  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 03:17:16.035507  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 03:17:16.035579  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 03:17:16.035622  356731 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 03:17:16.035638  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 03:17:16.035668  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 03:17:16.035705  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 03:17:16.035739  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 03:17:16.035792  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:17:16.035828  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:17:16.035847  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 03:17:16.035866  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 03:17:16.036271  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 03:17:16.059584  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 03:17:16.088924  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 03:17:16.111592  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 03:17:16.134881  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 03:17:16.159242  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 03:17:16.181465  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 03:17:16.204479  356731 ssh_runner.go:195] Run: openssl version
	I1128 03:17:16.210783  356731 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 03:17:16.210868  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 03:17:16.222811  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 03:17:16.227486  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:17:16.227767  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:17:16.227845  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 03:17:16.233480  356731 command_runner.go:130] > 51391683
	I1128 03:17:16.233537  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 03:17:16.242410  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 03:17:16.252489  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 03:17:16.256766  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:17:16.256798  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:17:16.256839  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 03:17:16.262119  356731 command_runner.go:130] > 3ec20f2e
	I1128 03:17:16.262522  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 03:17:16.272867  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 03:17:16.284190  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:17:16.288637  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:17:16.288830  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:17:16.288876  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:17:16.294028  356731 command_runner.go:130] > b5213941
	I1128 03:17:16.294437  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 03:17:16.304480  356731 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 03:17:16.308458  356731 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:17:16.308642  356731 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:17:16.308746  356731 ssh_runner.go:195] Run: crio config
	I1128 03:17:16.371790  356731 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 03:17:16.371822  356731 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 03:17:16.371832  356731 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 03:17:16.371837  356731 command_runner.go:130] > #
	I1128 03:17:16.371853  356731 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 03:17:16.371861  356731 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 03:17:16.371871  356731 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 03:17:16.371887  356731 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 03:17:16.371898  356731 command_runner.go:130] > # reload'.
	I1128 03:17:16.371911  356731 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 03:17:16.371930  356731 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 03:17:16.371947  356731 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 03:17:16.371962  356731 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 03:17:16.371970  356731 command_runner.go:130] > [crio]
	I1128 03:17:16.371982  356731 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 03:17:16.371994  356731 command_runner.go:130] > # containers images, in this directory.
	I1128 03:17:16.372014  356731 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 03:17:16.372044  356731 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 03:17:16.372057  356731 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 03:17:16.372072  356731 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 03:17:16.372086  356731 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 03:17:16.372098  356731 command_runner.go:130] > storage_driver = "overlay"
	I1128 03:17:16.372110  356731 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 03:17:16.372123  356731 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 03:17:16.372134  356731 command_runner.go:130] > storage_option = [
	I1128 03:17:16.372146  356731 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 03:17:16.372153  356731 command_runner.go:130] > ]
	I1128 03:17:16.372165  356731 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 03:17:16.372179  356731 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 03:17:16.372218  356731 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 03:17:16.372231  356731 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 03:17:16.372242  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 03:17:16.372253  356731 command_runner.go:130] > # always happen on a node reboot
	I1128 03:17:16.372265  356731 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 03:17:16.372277  356731 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 03:17:16.372289  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 03:17:16.372307  356731 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 03:17:16.372319  356731 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 03:17:16.372336  356731 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 03:17:16.372353  356731 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 03:17:16.372365  356731 command_runner.go:130] > # internal_wipe = true
	I1128 03:17:16.372376  356731 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 03:17:16.372389  356731 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 03:17:16.372401  356731 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 03:17:16.372414  356731 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 03:17:16.372428  356731 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 03:17:16.372440  356731 command_runner.go:130] > [crio.api]
	I1128 03:17:16.372453  356731 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 03:17:16.372464  356731 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 03:17:16.372477  356731 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 03:17:16.372487  356731 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 03:17:16.372498  356731 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 03:17:16.372508  356731 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 03:17:16.372519  356731 command_runner.go:130] > # stream_port = "0"
	I1128 03:17:16.372532  356731 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 03:17:16.372543  356731 command_runner.go:130] > # stream_enable_tls = false
	I1128 03:17:16.372554  356731 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 03:17:16.372565  356731 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 03:17:16.372580  356731 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 03:17:16.372594  356731 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 03:17:16.372604  356731 command_runner.go:130] > # minutes.
	I1128 03:17:16.372633  356731 command_runner.go:130] > # stream_tls_cert = ""
	I1128 03:17:16.372647  356731 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 03:17:16.372661  356731 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 03:17:16.372672  356731 command_runner.go:130] > # stream_tls_key = ""
	I1128 03:17:16.372685  356731 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 03:17:16.372700  356731 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 03:17:16.372713  356731 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 03:17:16.372724  356731 command_runner.go:130] > # stream_tls_ca = ""
	I1128 03:17:16.372738  356731 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:17:16.372748  356731 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 03:17:16.372761  356731 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:17:16.372772  356731 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 03:17:16.372795  356731 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 03:17:16.372808  356731 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 03:17:16.372818  356731 command_runner.go:130] > [crio.runtime]
	I1128 03:17:16.372832  356731 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 03:17:16.372844  356731 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 03:17:16.372852  356731 command_runner.go:130] > # "nofile=1024:2048"
	I1128 03:17:16.372866  356731 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 03:17:16.372876  356731 command_runner.go:130] > # default_ulimits = [
	I1128 03:17:16.372907  356731 command_runner.go:130] > # ]
	I1128 03:17:16.372919  356731 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 03:17:16.372930  356731 command_runner.go:130] > # no_pivot = false
	I1128 03:17:16.372942  356731 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 03:17:16.372956  356731 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 03:17:16.372968  356731 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 03:17:16.372981  356731 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 03:17:16.372993  356731 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 03:17:16.373012  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:17:16.373024  356731 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 03:17:16.373035  356731 command_runner.go:130] > # Cgroup setting for conmon
	I1128 03:17:16.373052  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 03:17:16.373062  356731 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 03:17:16.373077  356731 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 03:17:16.373089  356731 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 03:17:16.373101  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:17:16.373110  356731 command_runner.go:130] > conmon_env = [
	I1128 03:17:16.373119  356731 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 03:17:16.373126  356731 command_runner.go:130] > ]
	I1128 03:17:16.373140  356731 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 03:17:16.373151  356731 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 03:17:16.373165  356731 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 03:17:16.373171  356731 command_runner.go:130] > # default_env = [
	I1128 03:17:16.373176  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373186  356731 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 03:17:16.373193  356731 command_runner.go:130] > # selinux = false
	I1128 03:17:16.373207  356731 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 03:17:16.373221  356731 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 03:17:16.373231  356731 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 03:17:16.373242  356731 command_runner.go:130] > # seccomp_profile = ""
	I1128 03:17:16.373251  356731 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 03:17:16.373263  356731 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 03:17:16.373272  356731 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 03:17:16.373276  356731 command_runner.go:130] > # which might increase security.
	I1128 03:17:16.373282  356731 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 03:17:16.373292  356731 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 03:17:16.373306  356731 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 03:17:16.373321  356731 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 03:17:16.373335  356731 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 03:17:16.373347  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:17:16.373358  356731 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 03:17:16.373367  356731 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 03:17:16.373376  356731 command_runner.go:130] > # the cgroup blockio controller.
	I1128 03:17:16.373387  356731 command_runner.go:130] > # blockio_config_file = ""
	I1128 03:17:16.373401  356731 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 03:17:16.373411  356731 command_runner.go:130] > # irqbalance daemon.
	I1128 03:17:16.373423  356731 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 03:17:16.373434  356731 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 03:17:16.373446  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:17:16.373482  356731 command_runner.go:130] > # rdt_config_file = ""
	I1128 03:17:16.373498  356731 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 03:17:16.373506  356731 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 03:17:16.373519  356731 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 03:17:16.373530  356731 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 03:17:16.373540  356731 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 03:17:16.373554  356731 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 03:17:16.373564  356731 command_runner.go:130] > # will be added.
	I1128 03:17:16.373572  356731 command_runner.go:130] > # default_capabilities = [
	I1128 03:17:16.373577  356731 command_runner.go:130] > # 	"CHOWN",
	I1128 03:17:16.373586  356731 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 03:17:16.373592  356731 command_runner.go:130] > # 	"FSETID",
	I1128 03:17:16.373602  356731 command_runner.go:130] > # 	"FOWNER",
	I1128 03:17:16.373608  356731 command_runner.go:130] > # 	"SETGID",
	I1128 03:17:16.373616  356731 command_runner.go:130] > # 	"SETUID",
	I1128 03:17:16.373626  356731 command_runner.go:130] > # 	"SETPCAP",
	I1128 03:17:16.373634  356731 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 03:17:16.373643  356731 command_runner.go:130] > # 	"KILL",
	I1128 03:17:16.373650  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373664  356731 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 03:17:16.373676  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:17:16.373684  356731 command_runner.go:130] > # default_sysctls = [
	I1128 03:17:16.373693  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373700  356731 command_runner.go:130] > # List of devices on the host that a
	I1128 03:17:16.373716  356731 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 03:17:16.373727  356731 command_runner.go:130] > # allowed_devices = [
	I1128 03:17:16.373734  356731 command_runner.go:130] > # 	"/dev/fuse",
	I1128 03:17:16.373740  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373752  356731 command_runner.go:130] > # List of additional devices, specified as
	I1128 03:17:16.373765  356731 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 03:17:16.373778  356731 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 03:17:16.373805  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:17:16.373816  356731 command_runner.go:130] > # additional_devices = [
	I1128 03:17:16.373821  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373834  356731 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 03:17:16.373844  356731 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 03:17:16.373854  356731 command_runner.go:130] > # 	"/etc/cdi",
	I1128 03:17:16.373861  356731 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 03:17:16.373870  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373881  356731 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 03:17:16.373894  356731 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 03:17:16.373904  356731 command_runner.go:130] > # Defaults to false.
	I1128 03:17:16.373915  356731 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 03:17:16.373930  356731 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 03:17:16.373941  356731 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 03:17:16.373949  356731 command_runner.go:130] > # hooks_dir = [
	I1128 03:17:16.373963  356731 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 03:17:16.373973  356731 command_runner.go:130] > # ]
	I1128 03:17:16.373983  356731 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 03:17:16.373994  356731 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 03:17:16.374008  356731 command_runner.go:130] > # its default mounts from the following two files:
	I1128 03:17:16.374017  356731 command_runner.go:130] > #
	I1128 03:17:16.374030  356731 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 03:17:16.374043  356731 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 03:17:16.374055  356731 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 03:17:16.374062  356731 command_runner.go:130] > #
	I1128 03:17:16.374073  356731 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 03:17:16.374087  356731 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 03:17:16.374098  356731 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 03:17:16.374111  356731 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 03:17:16.374119  356731 command_runner.go:130] > #
	I1128 03:17:16.374129  356731 command_runner.go:130] > # default_mounts_file = ""
	I1128 03:17:16.374139  356731 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 03:17:16.374155  356731 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 03:17:16.374189  356731 command_runner.go:130] > pids_limit = 1024
	I1128 03:17:16.374201  356731 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 03:17:16.374213  356731 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 03:17:16.374227  356731 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 03:17:16.374243  356731 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 03:17:16.374254  356731 command_runner.go:130] > # log_size_max = -1
	I1128 03:17:16.374268  356731 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 03:17:16.374278  356731 command_runner.go:130] > # log_to_journald = false
	I1128 03:17:16.374292  356731 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 03:17:16.374303  356731 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 03:17:16.374314  356731 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 03:17:16.374322  356731 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 03:17:16.374333  356731 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 03:17:16.374342  356731 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 03:17:16.374351  356731 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 03:17:16.374361  356731 command_runner.go:130] > # read_only = false
	I1128 03:17:16.374371  356731 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 03:17:16.374384  356731 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 03:17:16.374393  356731 command_runner.go:130] > # live configuration reload.
	I1128 03:17:16.374400  356731 command_runner.go:130] > # log_level = "info"
	I1128 03:17:16.374411  356731 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 03:17:16.374419  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:17:16.374429  356731 command_runner.go:130] > # log_filter = ""
	I1128 03:17:16.374438  356731 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 03:17:16.374453  356731 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 03:17:16.374463  356731 command_runner.go:130] > # separated by comma.
	I1128 03:17:16.374469  356731 command_runner.go:130] > # uid_mappings = ""
	I1128 03:17:16.374481  356731 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 03:17:16.374492  356731 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 03:17:16.374501  356731 command_runner.go:130] > # separated by comma.
	I1128 03:17:16.374508  356731 command_runner.go:130] > # gid_mappings = ""
	I1128 03:17:16.374520  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 03:17:16.374538  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:17:16.374554  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:17:16.374564  356731 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 03:17:16.374576  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 03:17:16.374587  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:17:16.374596  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:17:16.374601  356731 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 03:17:16.374609  356731 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 03:17:16.374615  356731 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 03:17:16.374624  356731 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 03:17:16.374628  356731 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 03:17:16.374636  356731 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 03:17:16.374642  356731 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 03:17:16.374649  356731 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 03:17:16.374654  356731 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 03:17:16.374661  356731 command_runner.go:130] > drop_infra_ctr = false
	I1128 03:17:16.374667  356731 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 03:17:16.374674  356731 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 03:17:16.374681  356731 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 03:17:16.374688  356731 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 03:17:16.374694  356731 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 03:17:16.374701  356731 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 03:17:16.374706  356731 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 03:17:16.374713  356731 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 03:17:16.374719  356731 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 03:17:16.374729  356731 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 03:17:16.374742  356731 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 03:17:16.374755  356731 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 03:17:16.374765  356731 command_runner.go:130] > # default_runtime = "runc"
	I1128 03:17:16.374773  356731 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 03:17:16.374788  356731 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 03:17:16.374805  356731 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 03:17:16.374816  356731 command_runner.go:130] > # creation as a file is not desired either.
	I1128 03:17:16.374832  356731 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 03:17:16.374843  356731 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 03:17:16.374851  356731 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 03:17:16.374861  356731 command_runner.go:130] > # ]
	I1128 03:17:16.374871  356731 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 03:17:16.374984  356731 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 03:17:16.375012  356731 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 03:17:16.375027  356731 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 03:17:16.375036  356731 command_runner.go:130] > #
	I1128 03:17:16.375046  356731 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 03:17:16.375058  356731 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 03:17:16.375069  356731 command_runner.go:130] > #  runtime_type = "oci"
	I1128 03:17:16.375078  356731 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 03:17:16.375089  356731 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 03:17:16.375098  356731 command_runner.go:130] > #  allowed_annotations = []
	I1128 03:17:16.375106  356731 command_runner.go:130] > # Where:
	I1128 03:17:16.375115  356731 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 03:17:16.375128  356731 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 03:17:16.375138  356731 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 03:17:16.375151  356731 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 03:17:16.375160  356731 command_runner.go:130] > #   in $PATH.
	I1128 03:17:16.375171  356731 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 03:17:16.375183  356731 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 03:17:16.375198  356731 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 03:17:16.375208  356731 command_runner.go:130] > #   state.
	I1128 03:17:16.375221  356731 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 03:17:16.375235  356731 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 03:17:16.375244  356731 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 03:17:16.375257  356731 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 03:17:16.375270  356731 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 03:17:16.375283  356731 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 03:17:16.375293  356731 command_runner.go:130] > #   The currently recognized values are:
	I1128 03:17:16.375306  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 03:17:16.375321  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 03:17:16.375333  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 03:17:16.375343  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 03:17:16.375357  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 03:17:16.375371  356731 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 03:17:16.375385  356731 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 03:17:16.375402  356731 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 03:17:16.375413  356731 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 03:17:16.375423  356731 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 03:17:16.375430  356731 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 03:17:16.375438  356731 command_runner.go:130] > runtime_type = "oci"
	I1128 03:17:16.375448  356731 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 03:17:16.375455  356731 command_runner.go:130] > runtime_config_path = ""
	I1128 03:17:16.375463  356731 command_runner.go:130] > monitor_path = ""
	I1128 03:17:16.375467  356731 command_runner.go:130] > monitor_cgroup = ""
	I1128 03:17:16.375474  356731 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 03:17:16.375481  356731 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 03:17:16.375487  356731 command_runner.go:130] > # running containers
	I1128 03:17:16.375492  356731 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 03:17:16.375499  356731 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 03:17:16.375532  356731 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 03:17:16.375540  356731 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 03:17:16.375545  356731 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 03:17:16.375550  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 03:17:16.375557  356731 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 03:17:16.375562  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 03:17:16.375570  356731 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 03:17:16.375574  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 03:17:16.375581  356731 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 03:17:16.375589  356731 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 03:17:16.375595  356731 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 03:17:16.375605  356731 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1128 03:17:16.375612  356731 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 03:17:16.375620  356731 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 03:17:16.375629  356731 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 03:17:16.375639  356731 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 03:17:16.375644  356731 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 03:17:16.375652  356731 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 03:17:16.375658  356731 command_runner.go:130] > # Example:
	I1128 03:17:16.375663  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 03:17:16.375668  356731 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 03:17:16.375673  356731 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 03:17:16.375714  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 03:17:16.375720  356731 command_runner.go:130] > # cpuset = 0
	I1128 03:17:16.375724  356731 command_runner.go:130] > # cpushares = "0-1"
	I1128 03:17:16.375730  356731 command_runner.go:130] > # Where:
	I1128 03:17:16.375735  356731 command_runner.go:130] > # The workload name is workload-type.
	I1128 03:17:16.375742  356731 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 03:17:16.375750  356731 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 03:17:16.375756  356731 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 03:17:16.375766  356731 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 03:17:16.375775  356731 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 03:17:16.375779  356731 command_runner.go:130] > # 
	I1128 03:17:16.375787  356731 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 03:17:16.375791  356731 command_runner.go:130] > #
	I1128 03:17:16.375798  356731 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 03:17:16.375805  356731 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 03:17:16.375813  356731 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 03:17:16.375819  356731 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 03:17:16.375825  356731 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 03:17:16.375829  356731 command_runner.go:130] > [crio.image]
	I1128 03:17:16.375835  356731 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 03:17:16.375842  356731 command_runner.go:130] > # default_transport = "docker://"
	I1128 03:17:16.375848  356731 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 03:17:16.375856  356731 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:17:16.375861  356731 command_runner.go:130] > # global_auth_file = ""
	I1128 03:17:16.375868  356731 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 03:17:16.375874  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:17:16.375881  356731 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 03:17:16.375888  356731 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 03:17:16.375896  356731 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:17:16.375910  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:17:16.375917  356731 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 03:17:16.375922  356731 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 03:17:16.375928  356731 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 03:17:16.375935  356731 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 03:17:16.375941  356731 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 03:17:16.375947  356731 command_runner.go:130] > # pause_command = "/pause"
	I1128 03:17:16.375954  356731 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 03:17:16.375962  356731 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 03:17:16.375969  356731 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 03:17:16.375977  356731 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 03:17:16.375983  356731 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 03:17:16.375989  356731 command_runner.go:130] > # signature_policy = ""
	I1128 03:17:16.375995  356731 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 03:17:16.376004  356731 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 03:17:16.376008  356731 command_runner.go:130] > # changing them here.
	I1128 03:17:16.376012  356731 command_runner.go:130] > # insecure_registries = [
	I1128 03:17:16.376018  356731 command_runner.go:130] > # ]
	I1128 03:17:16.376024  356731 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 03:17:16.376029  356731 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 03:17:16.376038  356731 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 03:17:16.376044  356731 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 03:17:16.376050  356731 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 03:17:16.376058  356731 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 03:17:16.376064  356731 command_runner.go:130] > # CNI plugins.
	I1128 03:17:16.376068  356731 command_runner.go:130] > [crio.network]
	I1128 03:17:16.376074  356731 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 03:17:16.376080  356731 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 03:17:16.376085  356731 command_runner.go:130] > # cni_default_network = ""
	I1128 03:17:16.376092  356731 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 03:17:16.376102  356731 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 03:17:16.376114  356731 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 03:17:16.376122  356731 command_runner.go:130] > # plugin_dirs = [
	I1128 03:17:16.376142  356731 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 03:17:16.376151  356731 command_runner.go:130] > # ]
	I1128 03:17:16.376161  356731 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 03:17:16.376171  356731 command_runner.go:130] > [crio.metrics]
	I1128 03:17:16.376179  356731 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 03:17:16.376188  356731 command_runner.go:130] > enable_metrics = true
	I1128 03:17:16.376196  356731 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 03:17:16.376207  356731 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 03:17:16.376214  356731 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 03:17:16.376225  356731 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 03:17:16.376232  356731 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 03:17:16.376238  356731 command_runner.go:130] > # metrics_collectors = [
	I1128 03:17:16.376242  356731 command_runner.go:130] > # 	"operations",
	I1128 03:17:16.376249  356731 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 03:17:16.376254  356731 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 03:17:16.376259  356731 command_runner.go:130] > # 	"operations_errors",
	I1128 03:17:16.376263  356731 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 03:17:16.376270  356731 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 03:17:16.376685  356731 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 03:17:16.376701  356731 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 03:17:16.376709  356731 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 03:17:16.376715  356731 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 03:17:16.376723  356731 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 03:17:16.376730  356731 command_runner.go:130] > # 	"containers_oom_total",
	I1128 03:17:16.376744  356731 command_runner.go:130] > # 	"containers_oom",
	I1128 03:17:16.376752  356731 command_runner.go:130] > # 	"processes_defunct",
	I1128 03:17:16.376764  356731 command_runner.go:130] > # 	"operations_total",
	I1128 03:17:16.376774  356731 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 03:17:16.376785  356731 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 03:17:16.376797  356731 command_runner.go:130] > # 	"operations_errors_total",
	I1128 03:17:16.376808  356731 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 03:17:16.376819  356731 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 03:17:16.376831  356731 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 03:17:16.376841  356731 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 03:17:16.376849  356731 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 03:17:16.376861  356731 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 03:17:16.376870  356731 command_runner.go:130] > # ]
	I1128 03:17:16.376898  356731 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 03:17:16.376909  356731 command_runner.go:130] > # metrics_port = 9090
	I1128 03:17:16.376918  356731 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 03:17:16.376929  356731 command_runner.go:130] > # metrics_socket = ""
	I1128 03:17:16.376941  356731 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 03:17:16.376956  356731 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 03:17:16.376970  356731 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 03:17:16.376982  356731 command_runner.go:130] > # certificate on any modification event.
	I1128 03:17:16.376992  356731 command_runner.go:130] > # metrics_cert = ""
	I1128 03:17:16.377006  356731 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 03:17:16.377017  356731 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 03:17:16.377025  356731 command_runner.go:130] > # metrics_key = ""
	I1128 03:17:16.377039  356731 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 03:17:16.377049  356731 command_runner.go:130] > [crio.tracing]
	I1128 03:17:16.377062  356731 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 03:17:16.377072  356731 command_runner.go:130] > # enable_tracing = false
	I1128 03:17:16.377082  356731 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 03:17:16.377093  356731 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 03:17:16.377105  356731 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 03:17:16.377116  356731 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 03:17:16.377127  356731 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 03:17:16.377138  356731 command_runner.go:130] > [crio.stats]
	I1128 03:17:16.377152  356731 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 03:17:16.377165  356731 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 03:17:16.377176  356731 command_runner.go:130] > # stats_collection_period = 0
	I1128 03:17:16.378150  356731 command_runner.go:130] ! time="2023-11-28 03:17:16.359969149Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 03:17:16.378170  356731 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
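The dump above is the CRI-O configuration that minikube generates on the worker node (typically /etc/crio/crio.conf): note the non-default cgroup_manager = "cgroupfs", pids_limit = 1024, pause_image = "registry.k8s.io/pause:3.9", and runc as the only configured runtime handler. As an illustration only (not part of minikube), a minimal Go sketch that reads such a file back with the BurntSushi TOML library and prints those overrides; the struct layout and the config path are assumptions made for the example:

package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

// crioConfig models only the handful of keys checked here; the real file
// contains many more sections and fields.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			CgroupManager string `toml:"cgroup_manager"`
			PidsLimit     int64  `toml:"pids_limit"`
			ConmonCgroup  string `toml:"conmon_cgroup"`
		} `toml:"runtime"`
		Image struct {
			PauseImage string `toml:"pause_image"`
		} `toml:"image"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	// Path is an assumption; it is CRI-O's default config location.
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println("cgroup_manager:", cfg.Crio.Runtime.CgroupManager) // expected: cgroupfs
	fmt.Println("pids_limit:", cfg.Crio.Runtime.PidsLimit)         // expected: 1024
	fmt.Println("pause_image:", cfg.Crio.Image.PauseImage)         // expected: registry.k8s.io/pause:3.9
}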
	I1128 03:17:16.378519  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:17:16.378534  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:17:16.378548  356731 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 03:17:16.378586  356731 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.31 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-112998 NodeName:multinode-112998-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.31 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 03:17:16.378732  356731 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.31
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-112998-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.31
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 03:17:16.378826  356731 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.31
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
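The [Service] drop-in above is assembled from per-node values (kubelet binary directory, node name, node IP, CRI socket) before being copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal, illustrative text/template sketch in Go of rendering such a drop-in; the template text and field names are assumptions for this example, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// Per-node values substituted into the drop-in; names are illustrative.
type kubeletUnit struct {
	BinaryDir string
	NodeName  string
	NodeIP    string
	CRISocket string
}

const dropIn = `[Service]
ExecStart=
ExecStart={{.BinaryDir}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropIn))
	err := t.Execute(os.Stdout, kubeletUnit{
		BinaryDir: "/var/lib/minikube/binaries/v1.28.4",
		NodeName:  "multinode-112998-m02",
		NodeIP:    "192.168.39.31",
		CRISocket: "unix:///var/run/crio/crio.sock",
	})
	if err != nil {
		panic(err)
	}
}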
	I1128 03:17:16.378892  356731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 03:17:16.389854  356731 command_runner.go:130] > kubeadm
	I1128 03:17:16.389872  356731 command_runner.go:130] > kubectl
	I1128 03:17:16.389878  356731 command_runner.go:130] > kubelet
	I1128 03:17:16.390075  356731 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 03:17:16.390134  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 03:17:16.400591  356731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1128 03:17:16.418384  356731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 03:17:16.437095  356731 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I1128 03:17:16.441117  356731 command_runner.go:130] > 192.168.39.73	control-plane.minikube.internal
	I1128 03:17:16.441189  356731 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:17:16.441526  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:17:16.441575  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:17:16.441621  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:17:16.456341  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I1128 03:17:16.456828  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:17:16.457361  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:17:16.457386  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:17:16.457717  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:17:16.457890  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:17:16.458124  356731 start.go:304] JoinCluster: &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false i
ngress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizat
ions:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:17:16.458246  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 03:17:16.458269  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:17:16.461199  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:17:16.461565  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:17:16.461602  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:17:16.461727  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:17:16.461895  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:17:16.462060  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:17:16.462252  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:17:16.641555  356731 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 3rci1m.1zan0rur510a1s1w --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
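The join command printed above pairs a bootstrap token with --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A minimal Go sketch of recomputing that hash from the CA at /var/lib/minikube/certs/ca.crt (the certificatesDir configured earlier); this is illustrative and not part of the test harness:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// CA path taken from the kubeadm config above (certificatesDir).
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}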
	I1128 03:17:16.641602  356731 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:17:16.641655  356731 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:17:16.641945  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:17:16.641978  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:17:16.656519  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36097
	I1128 03:17:16.657006  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:17:16.657486  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:17:16.657507  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:17:16.657847  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:17:16.658058  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:17:16.658271  356731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-112998-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1128 03:17:16.658294  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:17:16.661237  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:17:16.661658  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:17:16.661679  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:17:16.661813  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:17:16.661978  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:17:16.662137  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:17:16.662271  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:17:16.885229  356731 command_runner.go:130] > node/multinode-112998-m02 cordoned
	I1128 03:17:19.927056  356731 command_runner.go:130] > pod "busybox-5bc68d56bd-cbjtg" has DeletionTimestamp older than 1 seconds, skipping
	I1128 03:17:19.927084  356731 command_runner.go:130] > node/multinode-112998-m02 drained
	I1128 03:17:19.928653  356731 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1128 03:17:19.928678  356731 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-v2g52, kube-system/kube-proxy-jgxjs
	I1128 03:17:19.928711  356731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-112998-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.270407355s)
	I1128 03:17:19.928728  356731 node.go:108] successfully drained node "m02"
	I1128 03:17:19.929158  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:17:19.929403  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:17:19.929812  356731 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1128 03:17:19.929877  356731 round_trippers.go:463] DELETE https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:17:19.929887  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:19.929899  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:19.929910  356731 round_trippers.go:473]     Content-Type: application/json
	I1128 03:17:19.929923  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:19.942849  356731 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1128 03:17:19.942869  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:19.942875  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:19.942881  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:19.942886  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:19.942891  356731 round_trippers.go:580]     Content-Length: 171
	I1128 03:17:19.942895  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:19 GMT
	I1128 03:17:19.942900  356731 round_trippers.go:580]     Audit-Id: cb1293f0-88c7-43a1-a751-7f98fea6cf9f
	I1128 03:17:19.942905  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:19.943130  356731 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-112998-m02","kind":"nodes","uid":"e3d7b5be-85ae-4210-986b-2b91a250ca8c"}}
	I1128 03:17:19.943184  356731 node.go:124] successfully deleted node "m02"
	I1128 03:17:19.943197  356731 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
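Before rejoining, the stale m02 node object is removed with a direct DELETE against /api/v1/nodes/multinode-112998-m02, as logged above. For reference, a minimal client-go sketch performing the equivalent call; this is an illustration, not minikube's own implementation:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as used by the test run above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17671-333305/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as the logged DELETE https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	if err := cs.CoreV1().Nodes().Delete(context.Background(), "multinode-112998-m02", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("node multinode-112998-m02 deleted")
}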
	I1128 03:17:19.943224  356731 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:17:19.943251  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 3rci1m.1zan0rur510a1s1w --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-112998-m02"
	I1128 03:17:20.008751  356731 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 03:17:20.191418  356731 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 03:17:20.191477  356731 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 03:17:20.256160  356731 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:17:20.256189  356731 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:17:20.256195  356731 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 03:17:20.410827  356731 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 03:17:20.933306  356731 command_runner.go:130] > This node has joined the cluster:
	I1128 03:17:20.933337  356731 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 03:17:20.933344  356731 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 03:17:20.933350  356731 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 03:17:20.935965  356731 command_runner.go:130] ! W1128 03:17:19.999628    2594 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 03:17:20.935991  356731 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1128 03:17:20.936003  356731 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1128 03:17:20.936016  356731 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1128 03:17:20.936037  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 03:17:21.250399  356731 start.go:306] JoinCluster complete in 4.79227018s
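(Note: the join above is simply `kubeadm join` run over SSH on the new worker, followed by enabling the kubelet unit. A rough stand-alone sketch of that invocation from Go follows; the endpoint, token, and hash are placeholders rather than the values from this run, and the unix:// socket scheme follows the deprecation warning printed above.)

    // Illustrative sketch (not minikube's implementation): run kubeadm join
    // with the same style of flags that appear in the ssh_runner command above.
    package join

    import (
        "fmt"
        "os/exec"
    )

    func joinWorker(endpoint, token, caCertHash, nodeName string) error {
        cmd := exec.Command("kubeadm", "join", endpoint,
            "--token", token,
            "--discovery-token-ca-cert-hash", caCertHash,
            "--ignore-preflight-errors=all",
            // URL scheme included, per the deprecation warning in the log.
            "--cri-socket", "unix:///var/run/crio/crio.sock",
            "--node-name", nodeName)
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubeadm join failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }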
	I1128 03:17:21.250432  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:17:21.250442  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:17:21.250516  356731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 03:17:21.258258  356731 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 03:17:21.258286  356731 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 03:17:21.258296  356731 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 03:17:21.258304  356731 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:17:21.258313  356731 command_runner.go:130] > Access: 2023-11-28 03:14:47.716335571 +0000
	I1128 03:17:21.258320  356731 command_runner.go:130] > Modify: 2023-11-16 19:19:18.000000000 +0000
	I1128 03:17:21.258328  356731 command_runner.go:130] > Change: 2023-11-28 03:14:45.792335571 +0000
	I1128 03:17:21.258337  356731 command_runner.go:130] >  Birth: -
	I1128 03:17:21.258510  356731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 03:17:21.258529  356731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 03:17:21.278980  356731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 03:17:21.651828  356731 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:17:21.655885  356731 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:17:21.660977  356731 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 03:17:21.676103  356731 command_runner.go:130] > daemonset.apps/kindnet configured
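(Note: the "unchanged"/"configured" result lines above are standard `kubectl apply` reporting for the kindnet CNI manifest copied to /var/tmp/minikube/cni.yaml. A minimal sketch of a comparable invocation, with placeholder paths, is below.)

    // Sketch only; binary and file paths are placeholders for whatever the
    // caller has staged on the node.
    package cniapply

    import (
        "fmt"
        "os/exec"
    )

    func applyCNIManifest(kubectlPath, kubeconfig, manifest string) error {
        out, err := exec.Command(kubectlPath, "apply",
            "--kubeconfig="+kubeconfig, "-f", manifest).CombinedOutput()
        if err != nil {
            return fmt.Errorf("apply CNI manifest: %v\n%s", err, out)
        }
        fmt.Printf("%s", out) // e.g. "daemonset.apps/kindnet configured"
        return nil
    }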
	I1128 03:17:21.680372  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:17:21.680591  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:17:21.680917  356731 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:17:21.680934  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.680945  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.680952  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.683598  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.683613  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.683620  356731 round_trippers.go:580]     Audit-Id: e360328f-a762-46f8-a353-16bd96be47f3
	I1128 03:17:21.683629  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.683634  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.683639  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.683645  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.683650  356731 round_trippers.go:580]     Content-Length: 291
	I1128 03:17:21.683658  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.683685  356731 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"899","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 03:17:21.683769  356731 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-112998" context rescaled to 1 replicas
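(Note: the GET on .../deployments/coredns/scale and the "rescaled to 1 replicas" line amount to reading the autoscaling/v1 Scale subresource and updating it when the replica count differs. A minimal client-go sketch of that round trip, assuming a pre-built clientset, is below.)

    // Sketch, assuming an existing clientset: read the coredns scale
    // subresource and pin it to one replica, as the log above does.
    package corednsscale

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func rescaleCoreDNS(cs *kubernetes.Clientset) error {
        ctx := context.Background()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == 1 {
            return nil // already at the desired count; nothing to write
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }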
	I1128 03:17:21.683799  356731 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1128 03:17:21.685955  356731 out.go:177] * Verifying Kubernetes components...
	I1128 03:17:21.687688  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:17:21.701352  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:17:21.701572  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:17:21.701801  356731 node_ready.go:35] waiting up to 6m0s for node "multinode-112998-m02" to be "Ready" ...
	I1128 03:17:21.701864  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:17:21.701868  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.701876  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.701882  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.704738  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.704758  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.704768  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.704776  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.704783  356731 round_trippers.go:580]     Audit-Id: e0e789c2-d065-4888-98df-c9c4b30331d8
	I1128 03:17:21.704790  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.704798  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.704810  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.705238  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"25a285c1-84a3-4258-9cf7-d6faf52fd6b2","resourceVersion":"1045","creationTimestamp":"2023-11-28T03:17:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 03:17:21.705556  356731 node_ready.go:49] node "multinode-112998-m02" has status "Ready":"True"
	I1128 03:17:21.705572  356731 node_ready.go:38] duration metric: took 3.755315ms waiting for node "multinode-112998-m02" to be "Ready" ...
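(Note: the node_ready lines above boil down to fetching the Node object and checking its Ready condition. A plain polling sketch with client-go, not minikube's own helper, is below.)

    // Sketch of a "wait for node Ready" loop comparable to node_ready.go above.
    package nodewait

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second) // simple fixed poll; the real code uses its own cadence
        }
        return fmt.Errorf("node %q was not Ready within %s", name, timeout)
    }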
	I1128 03:17:21.705580  356731 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:17:21.705637  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:17:21.705645  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.705652  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.705658  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.712582  356731 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1128 03:17:21.712602  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.712612  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.712622  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.712631  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.712641  356731 round_trippers.go:580]     Audit-Id: 1813f76d-017c-4020-a72b-21ea1c9831e8
	I1128 03:17:21.712648  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.712654  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.714440  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1055"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82198 chars]
	I1128 03:17:21.717942  356731 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.718069  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:17:21.718084  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.718094  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.718105  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.720600  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.720617  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.720623  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.720629  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.720634  356731 round_trippers.go:580]     Audit-Id: 2768a7d0-4e16-4392-a9aa-1eca29948c97
	I1128 03:17:21.720639  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.720644  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.720649  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.720960  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1128 03:17:21.721341  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:21.721354  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.721361  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.721367  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.723977  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.723994  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.724003  356731 round_trippers.go:580]     Audit-Id: 8b041b46-198a-4691-93a3-65a476cfd626
	I1128 03:17:21.724012  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.724021  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.724030  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.724039  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.724049  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.724201  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:21.724456  356731 pod_ready.go:92] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:21.724470  356731 pod_ready.go:81] duration metric: took 6.502641ms waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.724479  356731 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.724521  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:17:21.724529  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.724535  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.724541  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.726556  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.726573  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.726581  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.726589  356731 round_trippers.go:580]     Audit-Id: fde9adcb-87c8-442c-b7d5-c9b92ab544e6
	I1128 03:17:21.726596  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.726605  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.726614  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.726623  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.726800  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"874","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1128 03:17:21.727157  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:21.727170  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.727178  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.727186  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.730163  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.730179  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.730188  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.730196  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.730204  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.730217  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.730228  356731 round_trippers.go:580]     Audit-Id: ed702a56-2e51-4d20-8310-bf3122e71cfe
	I1128 03:17:21.730238  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.730433  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:21.730721  356731 pod_ready.go:92] pod "etcd-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:21.730735  356731 pod_ready.go:81] duration metric: took 6.246957ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.730753  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.730811  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:17:21.730819  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.730826  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.730834  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.734877  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:17:21.734892  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.734905  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.734914  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.734921  356731 round_trippers.go:580]     Audit-Id: 7fc35a2c-449b-44b8-a079-bc2327bbd31d
	I1128 03:17:21.734929  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.734937  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.734946  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.735136  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"901","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1128 03:17:21.735477  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:21.735491  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.735501  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.735510  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.739384  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:17:21.739399  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.739405  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.739413  356731 round_trippers.go:580]     Audit-Id: d81b5453-7a98-4c8a-8c25-c6c02acf5971
	I1128 03:17:21.739421  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.739430  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.739439  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.739448  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.739646  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:21.739941  356731 pod_ready.go:92] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:21.739955  356731 pod_ready.go:81] duration metric: took 9.190754ms waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.739962  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.740018  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:17:21.740029  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.740039  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.740046  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.742525  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.742544  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.742553  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.742562  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.742570  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.742585  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.742595  356731 round_trippers.go:580]     Audit-Id: f6552477-0d2b-4149-bdda-1695287c7a2b
	I1128 03:17:21.742607  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.742877  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"883","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1128 03:17:21.743219  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:21.743232  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.743240  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.743249  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.745763  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:21.745778  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.745786  356731 round_trippers.go:580]     Audit-Id: 24d78ca7-1be2-4bb6-ab7f-89472490b68f
	I1128 03:17:21.745794  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.745803  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.745818  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.745827  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.745839  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.746191  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:21.746459  356731 pod_ready.go:92] pod "kube-controller-manager-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:21.746475  356731 pod_ready.go:81] duration metric: took 6.504812ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.746486  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:21.902951  356731 request.go:629] Waited for 156.374365ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:17:21.903052  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:17:21.903061  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:21.903076  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:21.903088  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:21.906636  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:17:21.906665  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:21.906676  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:21.906685  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:21.906693  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:21.906700  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:21 GMT
	I1128 03:17:21.906713  356731 round_trippers.go:580]     Audit-Id: 2c68081f-bdff-472b-9004-34b55df994df
	I1128 03:17:21.906721  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:21.906968  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"730","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I1128 03:17:22.102898  356731 request.go:629] Waited for 195.391512ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:17:22.102986  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:17:22.102992  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:22.102999  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:22.103006  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:22.105755  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:22.105782  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:22.105793  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:22.105801  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:22.105808  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:22.105816  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:22 GMT
	I1128 03:17:22.105823  356731 round_trippers.go:580]     Audit-Id: 28638d68-0474-4a64-b854-44ea160a9e0c
	I1128 03:17:22.105831  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:22.106039  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"471d28bb-efb4-436f-9b13-4d96112b9f87","resourceVersion":"894","creationTimestamp":"2023-11-28T03:07:24Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:07:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I1128 03:17:22.106408  356731 pod_ready.go:92] pod "kube-proxy-bm5x4" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:22.106430  356731 pod_ready.go:81] duration metric: took 359.935723ms waiting for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:22.106444  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:22.302897  356731 request.go:629] Waited for 196.382975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:17:22.302979  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:17:22.302985  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:22.302993  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:22.303000  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:22.305634  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:22.305663  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:22.305673  356731 round_trippers.go:580]     Audit-Id: f0461ea9-a440-41f0-8779-fd5e471722ea
	I1128 03:17:22.305682  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:22.305690  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:22.305698  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:22.305706  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:22.305713  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:22 GMT
	I1128 03:17:22.305888  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"860","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:17:22.502817  356731 request.go:629] Waited for 196.391911ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:22.502888  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:22.502893  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:22.502906  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:22.502913  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:22.506242  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:17:22.506271  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:22.506281  356731 round_trippers.go:580]     Audit-Id: 8dcd0875-4180-4ba9-93f2-38429279b755
	I1128 03:17:22.506290  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:22.506301  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:22.506309  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:22.506321  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:22.506332  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:22 GMT
	I1128 03:17:22.506818  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:22.507169  356731 pod_ready.go:92] pod "kube-proxy-bmr6b" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:22.507185  356731 pod_ready.go:81] duration metric: took 400.733651ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:22.507194  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:22.702622  356731 request.go:629] Waited for 195.350327ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:17:22.702693  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:17:22.702697  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:22.702706  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:22.702712  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:22.705331  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:22.705358  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:22.705367  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:22.705375  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:22.705383  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:22.705390  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:22 GMT
	I1128 03:17:22.705398  356731 round_trippers.go:580]     Audit-Id: 7271a22f-cfa2-4bf9-8055-8a037f58c7ae
	I1128 03:17:22.705406  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:22.705547  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgxjs","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d","resourceVersion":"1063","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1128 03:17:22.902390  356731 request.go:629] Waited for 196.388138ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:17:22.902462  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:17:22.902467  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:22.902475  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:22.902482  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:22.905426  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:22.905447  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:22.905454  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:22.905466  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:22.905473  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:22 GMT
	I1128 03:17:22.905483  356731 round_trippers.go:580]     Audit-Id: fd23e281-3a9f-4510-a8a8-65b7da9fea2f
	I1128 03:17:22.905495  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:22.905506  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:22.905698  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"25a285c1-84a3-4258-9cf7-d6faf52fd6b2","resourceVersion":"1045","creationTimestamp":"2023-11-28T03:17:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 03:17:22.905958  356731 pod_ready.go:92] pod "kube-proxy-jgxjs" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:22.905973  356731 pod_ready.go:81] duration metric: took 398.773389ms waiting for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:22.905982  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:23.102418  356731 request.go:629] Waited for 196.363088ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:17:23.102497  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:17:23.102504  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:23.102512  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:23.102522  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:23.105764  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:17:23.105782  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:23.105789  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:23 GMT
	I1128 03:17:23.105794  356731 round_trippers.go:580]     Audit-Id: 19c76616-b0cd-452c-952f-d47824ad9f5c
	I1128 03:17:23.105800  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:23.105809  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:23.105818  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:23.105829  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:23.106329  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"875","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1128 03:17:23.301998  356731 request.go:629] Waited for 195.293659ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:23.302083  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:17:23.302088  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:23.302097  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:23.302104  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:23.307702  356731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:17:23.307723  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:23.307731  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:23.307737  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:23 GMT
	I1128 03:17:23.307742  356731 round_trippers.go:580]     Audit-Id: 4bc9f265-4623-4439-81e9-1fac7afc40f0
	I1128 03:17:23.307747  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:23.307751  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:23.307757  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:23.308053  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:17:23.308377  356731 pod_ready.go:92] pod "kube-scheduler-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:17:23.308393  356731 pod_ready.go:81] duration metric: took 402.403874ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:17:23.308402  356731 pod_ready.go:38] duration metric: took 1.602810519s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
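(Note: each per-pod wait above checks the pod's Ready condition; the interleaved "Waited ... due to client-side throttling" lines come from client-go's default QPS/Burst rate limiter pacing the requests, not from API-server priority and fairness. A rough sketch of a single pod-Ready probe is below; the QPS/Burst remark in the comment is an assumption about tuning, not something this run does.)

    // Sketch of a single "pod Ready" probe like the pod_ready.go checks above.
    // Raising QPS/Burst on the rest.Config before building the clientset would
    // reduce the client-side throttling messages seen in the log.
    package podwait

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func podIsReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.Background(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }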
	I1128 03:17:23.308418  356731 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 03:17:23.308464  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:17:23.324050  356731 system_svc.go:56] duration metric: took 15.621879ms WaitForService to wait for kubelet.
	I1128 03:17:23.324087  356731 kubeadm.go:581] duration metric: took 1.640266715s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 03:17:23.324121  356731 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:17:23.502540  356731 request.go:629] Waited for 178.334966ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes
	I1128 03:17:23.502599  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:17:23.502609  356731 round_trippers.go:469] Request Headers:
	I1128 03:17:23.502618  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:17:23.502624  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:17:23.505436  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:17:23.505459  356731 round_trippers.go:577] Response Headers:
	I1128 03:17:23.505466  356731 round_trippers.go:580]     Audit-Id: b1cb4fe2-f35f-428f-9668-ff6a56216b78
	I1128 03:17:23.505472  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:17:23.505477  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:17:23.505483  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:17:23.505491  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:17:23.505500  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:17:23 GMT
	I1128 03:17:23.506301  356731 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1066"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15104 chars]
	I1128 03:17:23.507141  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:17:23.507167  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:17:23.507182  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:17:23.507189  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:17:23.507195  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:17:23.507201  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:17:23.507207  356731 node_conditions.go:105] duration metric: took 183.07968ms to run NodePressure ...
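The node_conditions check above lists every node and reads its reported ephemeral-storage and CPU capacity from the API server. A minimal client-go sketch of the same query follows; the kubeconfig flag is an illustrative assumption and this is not minikube's internal code path.

// nodecap.go - list nodes and print the capacity fields reported in the log above.
// Assumes a reachable cluster and a kubeconfig passed on the command line.
package main

import (
	"context"
	"flag"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := flag.String("kubeconfig", "", "path to kubeconfig")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}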
	I1128 03:17:23.507225  356731 start.go:228] waiting for startup goroutines ...
	I1128 03:17:23.507257  356731 start.go:242] writing updated cluster config ...
	I1128 03:17:23.507846  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:17:23.507977  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:17:23.511052  356731 out.go:177] * Starting worker node multinode-112998-m03 in cluster multinode-112998
	I1128 03:17:23.512345  356731 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 03:17:23.512369  356731 cache.go:56] Caching tarball of preloaded images
	I1128 03:17:23.512449  356731 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 03:17:23.512460  356731 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 03:17:23.512549  356731 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/config.json ...
	I1128 03:17:23.512704  356731 start.go:365] acquiring machines lock for multinode-112998-m03: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:17:23.512749  356731 start.go:369] acquired machines lock for "multinode-112998-m03" in 26.462µs
	I1128 03:17:23.512762  356731 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:17:23.512769  356731 fix.go:54] fixHost starting: m03
	I1128 03:17:23.513053  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:17:23.513075  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:17:23.527367  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35639
	I1128 03:17:23.527810  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:17:23.528297  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:17:23.528321  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:17:23.528659  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:17:23.528908  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:17:23.529069  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetState
	I1128 03:17:23.530680  356731 fix.go:102] recreateIfNeeded on multinode-112998-m03: state=Running err=<nil>
	W1128 03:17:23.530717  356731 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:17:23.532536  356731 out.go:177] * Updating the running kvm2 "multinode-112998-m03" VM ...
	I1128 03:17:23.533970  356731 machine.go:88] provisioning docker machine ...
	I1128 03:17:23.533992  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:17:23.534284  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetMachineName
	I1128 03:17:23.534461  356731 buildroot.go:166] provisioning hostname "multinode-112998-m03"
	I1128 03:17:23.534483  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetMachineName
	I1128 03:17:23.534669  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:17:23.537119  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.537610  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:23.537641  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.537893  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:17:23.538072  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:23.538239  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:23.538390  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:17:23.538566  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:17:23.538906  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1128 03:17:23.538928  356731 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-112998-m03 && echo "multinode-112998-m03" | sudo tee /etc/hostname
	I1128 03:17:23.681847  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-112998-m03
	
	I1128 03:17:23.681904  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:17:23.684775  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.685159  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:23.685197  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.685364  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:17:23.685580  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:23.685729  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:23.685843  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:17:23.686015  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:17:23.686379  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1128 03:17:23.686401  356731 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-112998-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-112998-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-112998-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:17:23.813777  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
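Both provisioning commands above (setting the hostname, then patching /etc/hosts) are shipped to the VM over SSH by the ssh_runner. A rough, self-contained sketch of sending the same hostname command with golang.org/x/crypto/ssh is below; the address, user and key path are taken from the log as placeholders, and host key verification is skipped only because the target is a disposable test VM.

// sethostname.go - run the hostname command from the log on the VM over SSH.
// Address, user, key path and hostname are illustrative placeholders from the log above.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
	}
	client, err := ssh.Dial("tcp", "192.168.39.192:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	const name = "multinode-112998-m03"
	out, err := session.CombinedOutput(fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name))
	if err != nil {
		log.Fatalf("%v: %s", err, out)
	}
	fmt.Printf("hostname set: %s", out)
}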
	I1128 03:17:23.813816  356731 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:17:23.813840  356731 buildroot.go:174] setting up certificates
	I1128 03:17:23.813855  356731 provision.go:83] configureAuth start
	I1128 03:17:23.813870  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetMachineName
	I1128 03:17:23.814179  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetIP
	I1128 03:17:23.817213  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.817625  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:23.817660  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.817782  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:17:23.820244  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.820556  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:23.820608  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.820706  356731 provision.go:138] copyHostCerts
	I1128 03:17:23.820737  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:17:23.820768  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:17:23.820777  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:17:23.820846  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:17:23.820951  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:17:23.820975  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:17:23.820979  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:17:23.821011  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:17:23.821059  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:17:23.821075  356731 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:17:23.821082  356731 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:17:23.821101  356731 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:17:23.821144  356731 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.multinode-112998-m03 san=[192.168.39.192 192.168.39.192 localhost 127.0.0.1 minikube multinode-112998-m03]
	I1128 03:17:23.927614  356731 provision.go:172] copyRemoteCerts
	I1128 03:17:23.927681  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:17:23.927709  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:17:23.930753  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.931157  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:23.931207  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:23.931456  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:17:23.931690  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:23.931859  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:17:23.932017  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m03/id_rsa Username:docker}
	I1128 03:17:24.026961  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1128 03:17:24.027041  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:17:24.052786  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1128 03:17:24.052863  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1128 03:17:24.075137  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1128 03:17:24.075216  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 03:17:24.100898  356731 provision.go:86] duration metric: configureAuth took 287.011095ms
	I1128 03:17:24.100930  356731 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:17:24.101194  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:17:24.101302  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:17:24.104126  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:24.104614  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:17:24.104647  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:17:24.104809  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:17:24.105035  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:24.105200  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:17:24.105343  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:17:24.105489  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:17:24.105799  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1128 03:17:24.105815  356731 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:18:54.709387  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:18:54.709431  356731 machine.go:91] provisioned docker machine in 1m31.175443586s
	I1128 03:18:54.709444  356731 start.go:300] post-start starting for "multinode-112998-m03" (driver="kvm2")
	I1128 03:18:54.709475  356731 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:18:54.709506  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:18:54.709950  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:18:54.709995  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:18:54.712833  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.713166  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:18:54.713198  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.713356  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:18:54.713567  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:18:54.713756  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:18:54.713937  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m03/id_rsa Username:docker}
	I1128 03:18:54.815205  356731 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:18:54.819879  356731 command_runner.go:130] > NAME=Buildroot
	I1128 03:18:54.819908  356731 command_runner.go:130] > VERSION=2021.02.12-1-g21ec34a-dirty
	I1128 03:18:54.819915  356731 command_runner.go:130] > ID=buildroot
	I1128 03:18:54.819925  356731 command_runner.go:130] > VERSION_ID=2021.02.12
	I1128 03:18:54.819933  356731 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I1128 03:18:54.820015  356731 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 03:18:54.820048  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:18:54.820134  356731 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:18:54.820232  356731 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:18:54.820246  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /etc/ssl/certs/3405152.pem
	I1128 03:18:54.820324  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:18:54.828980  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:18:54.852754  356731 start.go:303] post-start completed in 143.295209ms
	I1128 03:18:54.852779  356731 fix.go:56] fixHost completed within 1m31.340010373s
	I1128 03:18:54.852806  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:18:54.855209  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.855501  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:18:54.855539  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.855660  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:18:54.855891  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:18:54.856036  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:18:54.856162  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:18:54.856303  356731 main.go:141] libmachine: Using SSH client type: native
	I1128 03:18:54.856694  356731 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.39.192 22 <nil> <nil>}
	I1128 03:18:54.856709  356731 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 03:18:54.985937  356731 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701141534.977804576
	
	I1128 03:18:54.985967  356731 fix.go:206] guest clock: 1701141534.977804576
	I1128 03:18:54.985975  356731 fix.go:219] Guest: 2023-11-28 03:18:54.977804576 +0000 UTC Remote: 2023-11-28 03:18:54.85278422 +0000 UTC m=+558.477292008 (delta=125.020356ms)
	I1128 03:18:54.985994  356731 fix.go:190] guest clock delta is within tolerance: 125.020356ms
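The clock check works by running `date +%s.%N` on the guest and diffing the result against the host's wall clock; here the delta is about 125ms, inside tolerance. The small sketch below recomputes that exact delta from the two timestamps in the log; the 2s tolerance constant is an assumed value for illustration.

// clockdelta.go - recompute the guest/host clock delta reported by fix.go above.
// Both timestamps are taken verbatim from the log; the tolerance is illustrative.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestTime parses `date +%s.%N` output (nine fractional digits) into a time.Time.
func guestTime(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := guestTime("1701141534.977804576") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2023, 11, 28, 3, 18, 54, 852784220, time.UTC) // host clock from the log

	delta := guest.Sub(host) // prints 125.020356ms, matching the logged delta
	const tolerance = 2 * time.Second // assumed threshold, for illustration
	fmt.Printf("guest clock delta: %v\n", delta)
	if delta < -tolerance || delta > tolerance {
		fmt.Println("outside tolerance: guest clock would be adjusted")
	} else {
		fmt.Println("within tolerance")
	}
}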
	I1128 03:18:54.986000  356731 start.go:83] releasing machines lock for "multinode-112998-m03", held for 1m31.473242363s
	I1128 03:18:54.986020  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:18:54.986321  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetIP
	I1128 03:18:54.989058  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.989404  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:18:54.989440  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.991646  356731 out.go:177] * Found network options:
	I1128 03:18:54.993323  356731 out.go:177]   - NO_PROXY=192.168.39.73,192.168.39.31
	W1128 03:18:54.994868  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	W1128 03:18:54.994887  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:18:54.994902  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:18:54.995496  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:18:54.995662  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .DriverName
	I1128 03:18:54.995743  356731 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:18:54.995791  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	W1128 03:18:54.995876  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	W1128 03:18:54.995906  356731 proxy.go:119] fail to check proxy env: Error ip not in block
	I1128 03:18:54.995975  356731 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:18:54.996002  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHHostname
	I1128 03:18:54.998669  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.998848  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.999047  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:18:54.999090  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.999195  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:18:54.999324  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:18:54.999342  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:18:54.999369  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:18:54.999516  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHPort
	I1128 03:18:54.999576  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:18:54.999658  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHKeyPath
	I1128 03:18:54.999720  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m03/id_rsa Username:docker}
	I1128 03:18:54.999827  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetSSHUsername
	I1128 03:18:54.999928  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.192 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m03/id_rsa Username:docker}
	I1128 03:18:55.248302  356731 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1128 03:18:55.248406  356731 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1128 03:18:55.279446  356731 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1128 03:18:55.280670  356731 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:18:55.280753  356731 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:18:55.297004  356731 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 03:18:55.297057  356731 start.go:472] detecting cgroup driver to use...
	I1128 03:18:55.297134  356731 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:18:55.339151  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:18:55.351859  356731 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:18:55.351922  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:18:55.366365  356731 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:18:55.379813  356731 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 03:18:55.518870  356731 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:18:55.663855  356731 docker.go:219] disabling docker service ...
	I1128 03:18:55.663933  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:18:55.686382  356731 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:18:55.700963  356731 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:18:55.853185  356731 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:18:56.005917  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:18:56.019140  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:18:56.038470  356731 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1128 03:18:56.038513  356731 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 03:18:56.038574  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:18:56.049777  356731 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 03:18:56.049853  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:18:56.060502  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:18:56.071260  356731 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:18:56.085342  356731 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 03:18:56.095852  356731 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 03:18:56.105018  356731 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1128 03:18:56.105119  356731 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 03:18:56.113998  356731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 03:18:56.265014  356731 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 03:19:05.347636  356731 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.082572335s)
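The preceding sed commands rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) before crio is restarted. The sketch below performs the equivalent line rewrites in Go against a local copy of the file; it is an illustration of the edits, not minikube's own implementation.

// crioconf.go - apply the same line rewrites the sed commands above perform,
// but against a local copy of 02-crio.conf. Values mirror the log.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // work on a copy, not the live /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// drop any existing conmon_cgroup line, then set cgroup_manager and re-add conmon_cgroup after it
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// The log then runs `sudo systemctl daemon-reload` and `sudo systemctl restart crio`.
}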
	I1128 03:19:05.347675  356731 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 03:19:05.347723  356731 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 03:19:05.353437  356731 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1128 03:19:05.353464  356731 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1128 03:19:05.353474  356731 command_runner.go:130] > Device: 16h/22d	Inode: 1200        Links: 1
	I1128 03:19:05.353483  356731 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:19:05.353491  356731 command_runner.go:130] > Access: 2023-11-28 03:19:05.255301454 +0000
	I1128 03:19:05.353504  356731 command_runner.go:130] > Modify: 2023-11-28 03:19:05.255301454 +0000
	I1128 03:19:05.353513  356731 command_runner.go:130] > Change: 2023-11-28 03:19:05.255301454 +0000
	I1128 03:19:05.353534  356731 command_runner.go:130] >  Birth: -
	I1128 03:19:05.353776  356731 start.go:540] Will wait 60s for crictl version
	I1128 03:19:05.353831  356731 ssh_runner.go:195] Run: which crictl
	I1128 03:19:05.357636  356731 command_runner.go:130] > /usr/bin/crictl
	I1128 03:19:05.357690  356731 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 03:19:05.399049  356731 command_runner.go:130] > Version:  0.1.0
	I1128 03:19:05.399073  356731 command_runner.go:130] > RuntimeName:  cri-o
	I1128 03:19:05.399077  356731 command_runner.go:130] > RuntimeVersion:  1.24.1
	I1128 03:19:05.399088  356731 command_runner.go:130] > RuntimeApiVersion:  v1
	I1128 03:19:05.399165  356731 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
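After the restart, the runner waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for its version. A compact sketch of that wait-then-query sequence is below; it assumes crictl is on PATH and is only an approximation of the logged steps.

// criocheck.go - wait for the CRI-O socket, then print `crictl version` output,
// mirroring the wait/version steps in the log above. Requires crictl installed.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"time"
)

func main() {
	const sock = "/var/run/crio/crio.sock"
	deadline := time.Now().Add(60 * time.Second) // same budget as "Will wait 60s"
	for {
		if _, err := os.Stat(sock); err == nil {
			break
		}
		if time.Now().After(deadline) {
			log.Fatalf("timed out waiting for %s", sock)
		}
		time.Sleep(500 * time.Millisecond)
	}

	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl version: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}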
	I1128 03:19:05.399260  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:19:05.447954  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:19:05.447989  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:19:05.447997  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:19:05.448001  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:19:05.448007  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:19:05.448012  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:19:05.448016  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:19:05.448023  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:19:05.448029  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:19:05.448036  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:19:05.448040  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:19:05.448044  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:19:05.449494  356731 ssh_runner.go:195] Run: crio --version
	I1128 03:19:05.505032  356731 command_runner.go:130] > crio version 1.24.1
	I1128 03:19:05.505065  356731 command_runner.go:130] > Version:          1.24.1
	I1128 03:19:05.505077  356731 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I1128 03:19:05.505084  356731 command_runner.go:130] > GitTreeState:     dirty
	I1128 03:19:05.505094  356731 command_runner.go:130] > BuildDate:        2023-11-16T19:10:07Z
	I1128 03:19:05.505102  356731 command_runner.go:130] > GoVersion:        go1.19.9
	I1128 03:19:05.505108  356731 command_runner.go:130] > Compiler:         gc
	I1128 03:19:05.505114  356731 command_runner.go:130] > Platform:         linux/amd64
	I1128 03:19:05.505123  356731 command_runner.go:130] > Linkmode:         dynamic
	I1128 03:19:05.505134  356731 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I1128 03:19:05.505145  356731 command_runner.go:130] > SeccompEnabled:   true
	I1128 03:19:05.505152  356731 command_runner.go:130] > AppArmorEnabled:  false
	I1128 03:19:05.507267  356731 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 03:19:05.508910  356731 out.go:177]   - env NO_PROXY=192.168.39.73
	I1128 03:19:05.510188  356731 out.go:177]   - env NO_PROXY=192.168.39.73,192.168.39.31
	I1128 03:19:05.511786  356731 main.go:141] libmachine: (multinode-112998-m03) Calling .GetIP
	I1128 03:19:05.514331  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:19:05.514672  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c6:f7:b4", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:07:18 +0000 UTC Type:0 Mac:52:54:00:c6:f7:b4 Iaid: IPaddr:192.168.39.192 Prefix:24 Hostname:multinode-112998-m03 Clientid:01:52:54:00:c6:f7:b4}
	I1128 03:19:05.514702  356731 main.go:141] libmachine: (multinode-112998-m03) DBG | domain multinode-112998-m03 has defined IP address 192.168.39.192 and MAC address 52:54:00:c6:f7:b4 in network mk-multinode-112998
	I1128 03:19:05.514884  356731 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1128 03:19:05.518962  356731 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1128 03:19:05.519385  356731 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998 for IP: 192.168.39.192
	I1128 03:19:05.519415  356731 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 03:19:05.519584  356731 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 03:19:05.519633  356731 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 03:19:05.519650  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1128 03:19:05.519671  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1128 03:19:05.519692  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1128 03:19:05.519709  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1128 03:19:05.519785  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 03:19:05.519828  356731 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 03:19:05.519842  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 03:19:05.519876  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 03:19:05.519907  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 03:19:05.519937  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 03:19:05.519998  356731 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:19:05.520026  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem -> /usr/share/ca-certificates/340515.pem
	I1128 03:19:05.520038  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> /usr/share/ca-certificates/3405152.pem
	I1128 03:19:05.520050  356731 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:19:05.520494  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 03:19:05.544981  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 03:19:05.569957  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 03:19:05.593361  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 03:19:05.619965  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 03:19:05.644165  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 03:19:05.667973  356731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 03:19:05.691305  356731 ssh_runner.go:195] Run: openssl version
	I1128 03:19:05.696640  356731 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I1128 03:19:05.696923  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 03:19:05.706856  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 03:19:05.711505  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:19:05.711561  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 03:19:05.711609  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 03:19:05.716783  356731 command_runner.go:130] > 3ec20f2e
	I1128 03:19:05.717007  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 03:19:05.724790  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 03:19:05.733971  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:19:05.738284  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:19:05.738346  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:19:05.738403  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 03:19:05.743949  356731 command_runner.go:130] > b5213941
	I1128 03:19:05.744265  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 03:19:05.752194  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 03:19:05.761385  356731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 03:19:05.766022  356731 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:19:05.766060  356731 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 03:19:05.766099  356731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 03:19:05.771846  356731 command_runner.go:130] > 51391683
	I1128 03:19:05.771911  356731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
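Each CA certificate copied above is made trusted by hashing it with `openssl x509 -hash -noout` and linking it into /etc/ssl/certs as <hash>.0. The sketch below reproduces that hash-and-link step against a scratch directory instead of the real /etc/ssl/certs; the certificate path is taken from the log.

// certlink.go - compute a certificate's openssl subject hash and create the
// <hash>.0 symlink, as the ln -fs commands above do. The link directory is a
// scratch path here rather than /etc/ssl/certs. Requires openssl on PATH.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // from the log
	linkDir := "./certs-scratch"                        // illustrative stand-in for /etc/ssl/certs

	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatalf("openssl: %v", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	if err := os.MkdirAll(linkDir, 0o755); err != nil {
		log.Fatal(err)
	}
	link := filepath.Join(linkDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any existing link
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %s\n", link, cert)
}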
	I1128 03:19:05.779946  356731 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 03:19:05.784613  356731 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:19:05.784818  356731 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1128 03:19:05.784928  356731 ssh_runner.go:195] Run: crio config
	I1128 03:19:05.839929  356731 command_runner.go:130] ! time="2023-11-28 03:19:05.831818798Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I1128 03:19:05.840004  356731 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1128 03:19:05.847658  356731 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1128 03:19:05.847693  356731 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1128 03:19:05.847704  356731 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1128 03:19:05.847707  356731 command_runner.go:130] > #
	I1128 03:19:05.847718  356731 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1128 03:19:05.847729  356731 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1128 03:19:05.847740  356731 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1128 03:19:05.847754  356731 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1128 03:19:05.847764  356731 command_runner.go:130] > # reload'.
	I1128 03:19:05.847775  356731 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1128 03:19:05.847797  356731 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1128 03:19:05.847812  356731 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1128 03:19:05.847836  356731 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1128 03:19:05.847844  356731 command_runner.go:130] > [crio]
	I1128 03:19:05.847856  356731 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1128 03:19:05.847861  356731 command_runner.go:130] > # containers images, in this directory.
	I1128 03:19:05.847870  356731 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1128 03:19:05.847880  356731 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1128 03:19:05.847888  356731 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1128 03:19:05.847894  356731 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1128 03:19:05.847903  356731 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1128 03:19:05.847908  356731 command_runner.go:130] > storage_driver = "overlay"
	I1128 03:19:05.847914  356731 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1128 03:19:05.847922  356731 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1128 03:19:05.847927  356731 command_runner.go:130] > storage_option = [
	I1128 03:19:05.847933  356731 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1128 03:19:05.847937  356731 command_runner.go:130] > ]
	I1128 03:19:05.847946  356731 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1128 03:19:05.847952  356731 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1128 03:19:05.847957  356731 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1128 03:19:05.847967  356731 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1128 03:19:05.847975  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1128 03:19:05.847982  356731 command_runner.go:130] > # always happen on a node reboot
	I1128 03:19:05.847987  356731 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1128 03:19:05.847995  356731 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1128 03:19:05.848001  356731 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1128 03:19:05.848011  356731 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1128 03:19:05.848016  356731 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I1128 03:19:05.848023  356731 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1128 03:19:05.848033  356731 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1128 03:19:05.848040  356731 command_runner.go:130] > # internal_wipe = true
	I1128 03:19:05.848045  356731 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1128 03:19:05.848054  356731 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1128 03:19:05.848060  356731 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1128 03:19:05.848072  356731 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1128 03:19:05.848092  356731 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1128 03:19:05.848099  356731 command_runner.go:130] > [crio.api]
	I1128 03:19:05.848105  356731 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1128 03:19:05.848112  356731 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1128 03:19:05.848118  356731 command_runner.go:130] > # IP address on which the stream server will listen.
	I1128 03:19:05.848127  356731 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1128 03:19:05.848134  356731 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1128 03:19:05.848141  356731 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1128 03:19:05.848148  356731 command_runner.go:130] > # stream_port = "0"
	I1128 03:19:05.848153  356731 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1128 03:19:05.848160  356731 command_runner.go:130] > # stream_enable_tls = false
	I1128 03:19:05.848166  356731 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1128 03:19:05.848173  356731 command_runner.go:130] > # stream_idle_timeout = ""
	I1128 03:19:05.848180  356731 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1128 03:19:05.848188  356731 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1128 03:19:05.848194  356731 command_runner.go:130] > # minutes.
	I1128 03:19:05.848198  356731 command_runner.go:130] > # stream_tls_cert = ""
	I1128 03:19:05.848206  356731 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1128 03:19:05.848215  356731 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1128 03:19:05.848221  356731 command_runner.go:130] > # stream_tls_key = ""
	I1128 03:19:05.848227  356731 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1128 03:19:05.848239  356731 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1128 03:19:05.848247  356731 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1128 03:19:05.848252  356731 command_runner.go:130] > # stream_tls_ca = ""
	I1128 03:19:05.848259  356731 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:19:05.848266  356731 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1128 03:19:05.848273  356731 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I1128 03:19:05.848279  356731 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1128 03:19:05.848296  356731 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1128 03:19:05.848308  356731 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1128 03:19:05.848311  356731 command_runner.go:130] > [crio.runtime]
	I1128 03:19:05.848317  356731 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1128 03:19:05.848323  356731 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1128 03:19:05.848329  356731 command_runner.go:130] > # "nofile=1024:2048"
	I1128 03:19:05.848336  356731 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1128 03:19:05.848342  356731 command_runner.go:130] > # default_ulimits = [
	I1128 03:19:05.848346  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848354  356731 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1128 03:19:05.848361  356731 command_runner.go:130] > # no_pivot = false
	I1128 03:19:05.848367  356731 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1128 03:19:05.848376  356731 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1128 03:19:05.848383  356731 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1128 03:19:05.848394  356731 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1128 03:19:05.848401  356731 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1128 03:19:05.848408  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:19:05.848415  356731 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1128 03:19:05.848420  356731 command_runner.go:130] > # Cgroup setting for conmon
	I1128 03:19:05.848429  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1128 03:19:05.848435  356731 command_runner.go:130] > conmon_cgroup = "pod"
	I1128 03:19:05.848442  356731 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1128 03:19:05.848449  356731 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1128 03:19:05.848456  356731 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1128 03:19:05.848462  356731 command_runner.go:130] > conmon_env = [
	I1128 03:19:05.848468  356731 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1128 03:19:05.848474  356731 command_runner.go:130] > ]
	I1128 03:19:05.848480  356731 command_runner.go:130] > # Additional environment variables to set for all the
	I1128 03:19:05.848487  356731 command_runner.go:130] > # containers. These are overridden if set in the
	I1128 03:19:05.848494  356731 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1128 03:19:05.848501  356731 command_runner.go:130] > # default_env = [
	I1128 03:19:05.848504  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848512  356731 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1128 03:19:05.848516  356731 command_runner.go:130] > # selinux = false
	I1128 03:19:05.848524  356731 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1128 03:19:05.848537  356731 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1128 03:19:05.848543  356731 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1128 03:19:05.848549  356731 command_runner.go:130] > # seccomp_profile = ""
	I1128 03:19:05.848555  356731 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1128 03:19:05.848563  356731 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1128 03:19:05.848569  356731 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1128 03:19:05.848577  356731 command_runner.go:130] > # which might increase security.
	I1128 03:19:05.848582  356731 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1128 03:19:05.848591  356731 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1128 03:19:05.848598  356731 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1128 03:19:05.848606  356731 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1128 03:19:05.848613  356731 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1128 03:19:05.848620  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:19:05.848625  356731 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1128 03:19:05.848633  356731 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1128 03:19:05.848638  356731 command_runner.go:130] > # the cgroup blockio controller.
	I1128 03:19:05.848644  356731 command_runner.go:130] > # blockio_config_file = ""
	I1128 03:19:05.848651  356731 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1128 03:19:05.848657  356731 command_runner.go:130] > # irqbalance daemon.
	I1128 03:19:05.848663  356731 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1128 03:19:05.848671  356731 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1128 03:19:05.848678  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:19:05.848683  356731 command_runner.go:130] > # rdt_config_file = ""
	I1128 03:19:05.848690  356731 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1128 03:19:05.848698  356731 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1128 03:19:05.848704  356731 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1128 03:19:05.848710  356731 command_runner.go:130] > # separate_pull_cgroup = ""
	I1128 03:19:05.848717  356731 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1128 03:19:05.848725  356731 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1128 03:19:05.848732  356731 command_runner.go:130] > # will be added.
	I1128 03:19:05.848738  356731 command_runner.go:130] > # default_capabilities = [
	I1128 03:19:05.848744  356731 command_runner.go:130] > # 	"CHOWN",
	I1128 03:19:05.848748  356731 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1128 03:19:05.848754  356731 command_runner.go:130] > # 	"FSETID",
	I1128 03:19:05.848758  356731 command_runner.go:130] > # 	"FOWNER",
	I1128 03:19:05.848764  356731 command_runner.go:130] > # 	"SETGID",
	I1128 03:19:05.848768  356731 command_runner.go:130] > # 	"SETUID",
	I1128 03:19:05.848775  356731 command_runner.go:130] > # 	"SETPCAP",
	I1128 03:19:05.848779  356731 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1128 03:19:05.848785  356731 command_runner.go:130] > # 	"KILL",
	I1128 03:19:05.848788  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848797  356731 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1128 03:19:05.848805  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:19:05.848810  356731 command_runner.go:130] > # default_sysctls = [
	I1128 03:19:05.848813  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848820  356731 command_runner.go:130] > # List of devices on the host that a
	I1128 03:19:05.848826  356731 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1128 03:19:05.848833  356731 command_runner.go:130] > # allowed_devices = [
	I1128 03:19:05.848837  356731 command_runner.go:130] > # 	"/dev/fuse",
	I1128 03:19:05.848843  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848848  356731 command_runner.go:130] > # List of additional devices. specified as
	I1128 03:19:05.848857  356731 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1128 03:19:05.848864  356731 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1128 03:19:05.848896  356731 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1128 03:19:05.848906  356731 command_runner.go:130] > # additional_devices = [
	I1128 03:19:05.848912  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848920  356731 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1128 03:19:05.848926  356731 command_runner.go:130] > # cdi_spec_dirs = [
	I1128 03:19:05.848931  356731 command_runner.go:130] > # 	"/etc/cdi",
	I1128 03:19:05.848937  356731 command_runner.go:130] > # 	"/var/run/cdi",
	I1128 03:19:05.848941  356731 command_runner.go:130] > # ]
	I1128 03:19:05.848949  356731 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1128 03:19:05.848956  356731 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1128 03:19:05.848963  356731 command_runner.go:130] > # Defaults to false.
	I1128 03:19:05.848968  356731 command_runner.go:130] > # device_ownership_from_security_context = false
	I1128 03:19:05.848976  356731 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1128 03:19:05.848985  356731 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1128 03:19:05.848992  356731 command_runner.go:130] > # hooks_dir = [
	I1128 03:19:05.848996  356731 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1128 03:19:05.849002  356731 command_runner.go:130] > # ]
	I1128 03:19:05.849008  356731 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1128 03:19:05.849017  356731 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1128 03:19:05.849024  356731 command_runner.go:130] > # its default mounts from the following two files:
	I1128 03:19:05.849027  356731 command_runner.go:130] > #
	I1128 03:19:05.849036  356731 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1128 03:19:05.849044  356731 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1128 03:19:05.849050  356731 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1128 03:19:05.849056  356731 command_runner.go:130] > #
	I1128 03:19:05.849062  356731 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1128 03:19:05.849077  356731 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1128 03:19:05.849085  356731 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1128 03:19:05.849091  356731 command_runner.go:130] > #      only add mounts it finds in this file.
	I1128 03:19:05.849096  356731 command_runner.go:130] > #
	I1128 03:19:05.849100  356731 command_runner.go:130] > # default_mounts_file = ""
	I1128 03:19:05.849108  356731 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1128 03:19:05.849115  356731 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1128 03:19:05.849121  356731 command_runner.go:130] > pids_limit = 1024
	I1128 03:19:05.849128  356731 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1128 03:19:05.849136  356731 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1128 03:19:05.849145  356731 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1128 03:19:05.849153  356731 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1128 03:19:05.849159  356731 command_runner.go:130] > # log_size_max = -1
	I1128 03:19:05.849166  356731 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1128 03:19:05.849173  356731 command_runner.go:130] > # log_to_journald = false
	I1128 03:19:05.849179  356731 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1128 03:19:05.849186  356731 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1128 03:19:05.849191  356731 command_runner.go:130] > # Path to directory for container attach sockets.
	I1128 03:19:05.849198  356731 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1128 03:19:05.849204  356731 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1128 03:19:05.849210  356731 command_runner.go:130] > # bind_mount_prefix = ""
	I1128 03:19:05.849216  356731 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1128 03:19:05.849223  356731 command_runner.go:130] > # read_only = false
	I1128 03:19:05.849230  356731 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1128 03:19:05.849238  356731 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1128 03:19:05.849244  356731 command_runner.go:130] > # live configuration reload.
	I1128 03:19:05.849249  356731 command_runner.go:130] > # log_level = "info"
	I1128 03:19:05.849256  356731 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1128 03:19:05.849263  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:19:05.849269  356731 command_runner.go:130] > # log_filter = ""
	I1128 03:19:05.849275  356731 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1128 03:19:05.849283  356731 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1128 03:19:05.849288  356731 command_runner.go:130] > # separated by comma.
	I1128 03:19:05.849291  356731 command_runner.go:130] > # uid_mappings = ""
	I1128 03:19:05.849300  356731 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1128 03:19:05.849306  356731 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1128 03:19:05.849313  356731 command_runner.go:130] > # separated by comma.
	I1128 03:19:05.849317  356731 command_runner.go:130] > # gid_mappings = ""
	I1128 03:19:05.849325  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1128 03:19:05.849334  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:19:05.849340  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:19:05.849347  356731 command_runner.go:130] > # minimum_mappable_uid = -1
	I1128 03:19:05.849353  356731 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1128 03:19:05.849361  356731 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1128 03:19:05.849367  356731 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1128 03:19:05.849374  356731 command_runner.go:130] > # minimum_mappable_gid = -1
	I1128 03:19:05.849380  356731 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1128 03:19:05.849388  356731 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1128 03:19:05.849396  356731 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1128 03:19:05.849400  356731 command_runner.go:130] > # ctr_stop_timeout = 30
	I1128 03:19:05.849408  356731 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1128 03:19:05.849414  356731 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1128 03:19:05.849421  356731 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1128 03:19:05.849426  356731 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1128 03:19:05.849433  356731 command_runner.go:130] > drop_infra_ctr = false
	I1128 03:19:05.849442  356731 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1128 03:19:05.849450  356731 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1128 03:19:05.849459  356731 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1128 03:19:05.849465  356731 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1128 03:19:05.849472  356731 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1128 03:19:05.849479  356731 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1128 03:19:05.849484  356731 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1128 03:19:05.849493  356731 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1128 03:19:05.849501  356731 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1128 03:19:05.849507  356731 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1128 03:19:05.849514  356731 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I1128 03:19:05.849522  356731 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I1128 03:19:05.849527  356731 command_runner.go:130] > # default_runtime = "runc"
	I1128 03:19:05.849532  356731 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1128 03:19:05.849540  356731 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1128 03:19:05.849551  356731 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1128 03:19:05.849557  356731 command_runner.go:130] > # creation as a file is not desired either.
	I1128 03:19:05.849566  356731 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1128 03:19:05.849573  356731 command_runner.go:130] > # the hostname is being managed dynamically.
	I1128 03:19:05.849578  356731 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1128 03:19:05.849584  356731 command_runner.go:130] > # ]
	I1128 03:19:05.849590  356731 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1128 03:19:05.849599  356731 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1128 03:19:05.849605  356731 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I1128 03:19:05.849614  356731 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I1128 03:19:05.849619  356731 command_runner.go:130] > #
	I1128 03:19:05.849624  356731 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I1128 03:19:05.849631  356731 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I1128 03:19:05.849635  356731 command_runner.go:130] > #  runtime_type = "oci"
	I1128 03:19:05.849642  356731 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I1128 03:19:05.849647  356731 command_runner.go:130] > #  privileged_without_host_devices = false
	I1128 03:19:05.849654  356731 command_runner.go:130] > #  allowed_annotations = []
	I1128 03:19:05.849658  356731 command_runner.go:130] > # Where:
	I1128 03:19:05.849666  356731 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I1128 03:19:05.849672  356731 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I1128 03:19:05.849680  356731 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1128 03:19:05.849694  356731 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1128 03:19:05.849701  356731 command_runner.go:130] > #   in $PATH.
	I1128 03:19:05.849707  356731 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I1128 03:19:05.849715  356731 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1128 03:19:05.849723  356731 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I1128 03:19:05.849729  356731 command_runner.go:130] > #   state.
	I1128 03:19:05.849754  356731 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1128 03:19:05.849768  356731 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1128 03:19:05.849774  356731 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1128 03:19:05.849780  356731 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1128 03:19:05.849788  356731 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1128 03:19:05.849797  356731 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1128 03:19:05.849804  356731 command_runner.go:130] > #   The currently recognized values are:
	I1128 03:19:05.849810  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1128 03:19:05.849819  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1128 03:19:05.849828  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1128 03:19:05.849834  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1128 03:19:05.849844  356731 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1128 03:19:05.849853  356731 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1128 03:19:05.849868  356731 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1128 03:19:05.849877  356731 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I1128 03:19:05.849884  356731 command_runner.go:130] > #   should be moved to the container's cgroup
	I1128 03:19:05.849892  356731 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1128 03:19:05.849898  356731 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1128 03:19:05.849903  356731 command_runner.go:130] > runtime_type = "oci"
	I1128 03:19:05.849909  356731 command_runner.go:130] > runtime_root = "/run/runc"
	I1128 03:19:05.849914  356731 command_runner.go:130] > runtime_config_path = ""
	I1128 03:19:05.849921  356731 command_runner.go:130] > monitor_path = ""
	I1128 03:19:05.849925  356731 command_runner.go:130] > monitor_cgroup = ""
	I1128 03:19:05.849932  356731 command_runner.go:130] > monitor_exec_cgroup = ""
	I1128 03:19:05.849938  356731 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I1128 03:19:05.849944  356731 command_runner.go:130] > # running containers
	I1128 03:19:05.849948  356731 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I1128 03:19:05.849955  356731 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I1128 03:19:05.849983  356731 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I1128 03:19:05.849998  356731 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I1128 03:19:05.850005  356731 command_runner.go:130] > # Kata Containers with the default configured VMM
	I1128 03:19:05.850011  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I1128 03:19:05.850018  356731 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I1128 03:19:05.850023  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I1128 03:19:05.850031  356731 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I1128 03:19:05.850036  356731 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I1128 03:19:05.850045  356731 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1128 03:19:05.850052  356731 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1128 03:19:05.850061  356731 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1128 03:19:05.850074  356731 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1128 03:19:05.850084  356731 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1128 03:19:05.850092  356731 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1128 03:19:05.850104  356731 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1128 03:19:05.850114  356731 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1128 03:19:05.850122  356731 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1128 03:19:05.850130  356731 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1128 03:19:05.850135  356731 command_runner.go:130] > # Example:
	I1128 03:19:05.850143  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1128 03:19:05.850150  356731 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1128 03:19:05.850155  356731 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1128 03:19:05.850164  356731 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1128 03:19:05.850170  356731 command_runner.go:130] > # cpuset = 0
	I1128 03:19:05.850175  356731 command_runner.go:130] > # cpushares = "0-1"
	I1128 03:19:05.850181  356731 command_runner.go:130] > # Where:
	I1128 03:19:05.850187  356731 command_runner.go:130] > # The workload name is workload-type.
	I1128 03:19:05.850197  356731 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1128 03:19:05.850203  356731 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1128 03:19:05.850211  356731 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1128 03:19:05.850221  356731 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1128 03:19:05.850229  356731 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1128 03:19:05.850235  356731 command_runner.go:130] > # 
	I1128 03:19:05.850241  356731 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1128 03:19:05.850246  356731 command_runner.go:130] > #
	I1128 03:19:05.850252  356731 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1128 03:19:05.850260  356731 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1128 03:19:05.850269  356731 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1128 03:19:05.850277  356731 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1128 03:19:05.850286  356731 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1128 03:19:05.850292  356731 command_runner.go:130] > [crio.image]
	I1128 03:19:05.850298  356731 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1128 03:19:05.850305  356731 command_runner.go:130] > # default_transport = "docker://"
	I1128 03:19:05.850312  356731 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1128 03:19:05.850321  356731 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:19:05.850327  356731 command_runner.go:130] > # global_auth_file = ""
	I1128 03:19:05.850332  356731 command_runner.go:130] > # The image used to instantiate infra containers.
	I1128 03:19:05.850339  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:19:05.850344  356731 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I1128 03:19:05.850353  356731 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1128 03:19:05.850361  356731 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1128 03:19:05.850368  356731 command_runner.go:130] > # This option supports live configuration reload.
	I1128 03:19:05.850372  356731 command_runner.go:130] > # pause_image_auth_file = ""
	I1128 03:19:05.850380  356731 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1128 03:19:05.850389  356731 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1128 03:19:05.850397  356731 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1128 03:19:05.850405  356731 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1128 03:19:05.850410  356731 command_runner.go:130] > # pause_command = "/pause"
	I1128 03:19:05.850418  356731 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1128 03:19:05.850426  356731 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1128 03:19:05.850434  356731 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1128 03:19:05.850440  356731 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1128 03:19:05.850450  356731 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1128 03:19:05.850456  356731 command_runner.go:130] > # signature_policy = ""
	I1128 03:19:05.850462  356731 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1128 03:19:05.850470  356731 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1128 03:19:05.850477  356731 command_runner.go:130] > # changing them here.
	I1128 03:19:05.850481  356731 command_runner.go:130] > # insecure_registries = [
	I1128 03:19:05.850486  356731 command_runner.go:130] > # ]
	I1128 03:19:05.850493  356731 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1128 03:19:05.850501  356731 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1128 03:19:05.850505  356731 command_runner.go:130] > # image_volumes = "mkdir"
	I1128 03:19:05.850513  356731 command_runner.go:130] > # Temporary directory to use for storing big files
	I1128 03:19:05.850517  356731 command_runner.go:130] > # big_files_temporary_dir = ""
	I1128 03:19:05.850525  356731 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1128 03:19:05.850529  356731 command_runner.go:130] > # CNI plugins.
	I1128 03:19:05.850533  356731 command_runner.go:130] > [crio.network]
	I1128 03:19:05.850549  356731 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1128 03:19:05.850558  356731 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1128 03:19:05.850562  356731 command_runner.go:130] > # cni_default_network = ""
	I1128 03:19:05.850570  356731 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1128 03:19:05.850574  356731 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1128 03:19:05.850582  356731 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1128 03:19:05.850586  356731 command_runner.go:130] > # plugin_dirs = [
	I1128 03:19:05.850591  356731 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1128 03:19:05.850594  356731 command_runner.go:130] > # ]
	I1128 03:19:05.850600  356731 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1128 03:19:05.850606  356731 command_runner.go:130] > [crio.metrics]
	I1128 03:19:05.850611  356731 command_runner.go:130] > # Globally enable or disable metrics support.
	I1128 03:19:05.850616  356731 command_runner.go:130] > enable_metrics = true
	I1128 03:19:05.850621  356731 command_runner.go:130] > # Specify enabled metrics collectors.
	I1128 03:19:05.850628  356731 command_runner.go:130] > # Per default all metrics are enabled.
	I1128 03:19:05.850634  356731 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1128 03:19:05.850643  356731 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1128 03:19:05.850649  356731 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1128 03:19:05.850654  356731 command_runner.go:130] > # metrics_collectors = [
	I1128 03:19:05.850658  356731 command_runner.go:130] > # 	"operations",
	I1128 03:19:05.850665  356731 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1128 03:19:05.850670  356731 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1128 03:19:05.850676  356731 command_runner.go:130] > # 	"operations_errors",
	I1128 03:19:05.850680  356731 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1128 03:19:05.850687  356731 command_runner.go:130] > # 	"image_pulls_by_name",
	I1128 03:19:05.850691  356731 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1128 03:19:05.850696  356731 command_runner.go:130] > # 	"image_pulls_failures",
	I1128 03:19:05.850702  356731 command_runner.go:130] > # 	"image_pulls_successes",
	I1128 03:19:05.850707  356731 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1128 03:19:05.850713  356731 command_runner.go:130] > # 	"image_layer_reuse",
	I1128 03:19:05.850717  356731 command_runner.go:130] > # 	"containers_oom_total",
	I1128 03:19:05.850724  356731 command_runner.go:130] > # 	"containers_oom",
	I1128 03:19:05.850728  356731 command_runner.go:130] > # 	"processes_defunct",
	I1128 03:19:05.850734  356731 command_runner.go:130] > # 	"operations_total",
	I1128 03:19:05.850739  356731 command_runner.go:130] > # 	"operations_latency_seconds",
	I1128 03:19:05.850745  356731 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1128 03:19:05.850750  356731 command_runner.go:130] > # 	"operations_errors_total",
	I1128 03:19:05.850757  356731 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1128 03:19:05.850761  356731 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1128 03:19:05.850768  356731 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1128 03:19:05.850772  356731 command_runner.go:130] > # 	"image_pulls_success_total",
	I1128 03:19:05.850779  356731 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1128 03:19:05.850783  356731 command_runner.go:130] > # 	"containers_oom_count_total",
	I1128 03:19:05.850787  356731 command_runner.go:130] > # ]
	I1128 03:19:05.850795  356731 command_runner.go:130] > # The port on which the metrics server will listen.
	I1128 03:19:05.850799  356731 command_runner.go:130] > # metrics_port = 9090
	I1128 03:19:05.850806  356731 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1128 03:19:05.850810  356731 command_runner.go:130] > # metrics_socket = ""
	I1128 03:19:05.850821  356731 command_runner.go:130] > # The certificate for the secure metrics server.
	I1128 03:19:05.850830  356731 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1128 03:19:05.850838  356731 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1128 03:19:05.850845  356731 command_runner.go:130] > # certificate on any modification event.
	I1128 03:19:05.850849  356731 command_runner.go:130] > # metrics_cert = ""
	I1128 03:19:05.850856  356731 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1128 03:19:05.850862  356731 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1128 03:19:05.850869  356731 command_runner.go:130] > # metrics_key = ""
	I1128 03:19:05.850874  356731 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1128 03:19:05.850880  356731 command_runner.go:130] > [crio.tracing]
	I1128 03:19:05.850887  356731 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1128 03:19:05.850893  356731 command_runner.go:130] > # enable_tracing = false
	I1128 03:19:05.850898  356731 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1128 03:19:05.850905  356731 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1128 03:19:05.850910  356731 command_runner.go:130] > # Number of samples to collect per million spans.
	I1128 03:19:05.850917  356731 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1128 03:19:05.850923  356731 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1128 03:19:05.850929  356731 command_runner.go:130] > [crio.stats]
	I1128 03:19:05.850935  356731 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1128 03:19:05.850943  356731 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1128 03:19:05.850947  356731 command_runner.go:130] > # stats_collection_period = 0
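
For reference, the handful of settings in the dump above that minikube sets to non-default values can be read back off the node. A minimal sketch, assuming CRI-O's conventional config path /etc/crio/crio.conf (the log does not name the file); the key names are copied verbatim from the dump:

# Show only the keys minikube overrides in the dumped CRI-O configuration.
# Path is an assumption; "key = value" spacing matches the dump above.
sudo grep -E '^(cgroup_manager|conmon|conmon_cgroup|conmon_env|pids_limit|pinns_path|drop_infra_ctr|pause_image|enable_metrics|seccomp_use_default_when_empty|grpc_max_send_msg_size|grpc_max_recv_msg_size) =' /etc/crio/crio.conf
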
	I1128 03:19:05.851022  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:19:05.851033  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:19:05.851044  356731 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 03:19:05.851066  356731 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.192 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-112998 NodeName:multinode-112998-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.73"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.192 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 03:19:05.851207  356731 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.192
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-112998-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.192
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.73"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 03:19:05.851257  356731 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.192
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 03:19:05.851306  356731 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 03:19:05.861046  356731 command_runner.go:130] > kubeadm
	I1128 03:19:05.861071  356731 command_runner.go:130] > kubectl
	I1128 03:19:05.861078  356731 command_runner.go:130] > kubelet
	I1128 03:19:05.861114  356731 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 03:19:05.861186  356731 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1128 03:19:05.870398  356731 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1128 03:19:05.887702  356731 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
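
The two scp lines above write the kubelet unit files; the 380-byte 10-kubeadm.conf corresponds to the drop-in text logged just before it (the kubelet.service contents are not shown in this excerpt). A minimal sketch of doing the same step by hand over SSH, using the paths and flags from the log; the trailing daemon-reload is assumed here as the usual step after editing unit files, not something shown in this excerpt:

# Recreate the directories minikube makes, then write the logged drop-in.
sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-112998-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.192

[Install]
EOF
# Assumed follow-up: pick up the changed unit files.
sudo systemctl daemon-reload
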
	I1128 03:19:05.904639  356731 ssh_runner.go:195] Run: grep 192.168.39.73	control-plane.minikube.internal$ /etc/hosts
	I1128 03:19:05.908376  356731 command_runner.go:130] > 192.168.39.73	control-plane.minikube.internal
	I1128 03:19:05.908431  356731 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:19:05.908734  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:19:05.908745  356731 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:19:05.908770  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:19:05.923698  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40319
	I1128 03:19:05.924177  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:19:05.924614  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:19:05.924632  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:19:05.924979  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:19:05.925165  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:19:05.925306  356731 start.go:304] JoinCluster: &{Name:multinode-112998 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-112998 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.73 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.31 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 03:19:05.925427  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1128 03:19:05.925441  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:19:05.928173  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:19:05.928607  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:19:05.928637  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:19:05.928787  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:19:05.928962  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:19:05.929101  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:19:05.929213  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:19:06.098993  356731 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token dow6c5.x69qu0gd4nwx6zn1 --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 03:19:06.099278  356731 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 03:19:06.099319  356731 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:19:06.099645  356731 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:19:06.099688  356731 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:19:06.116567  356731 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35401
	I1128 03:19:06.117038  356731 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:19:06.117571  356731 main.go:141] libmachine: Using API Version  1
	I1128 03:19:06.117611  356731 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:19:06.117975  356731 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:19:06.118188  356731 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:19:06.118394  356731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-112998-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1128 03:19:06.118424  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:19:06.121186  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:19:06.121579  356731 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:19:06.121606  356731 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:19:06.121722  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:19:06.121938  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:19:06.122117  356731 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:19:06.122257  356731 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:19:06.272533  356731 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1128 03:19:06.332557  356731 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-587m7, kube-system/kube-proxy-bm5x4
	I1128 03:19:09.348852  356731 command_runner.go:130] > node/multinode-112998-m03 cordoned
	I1128 03:19:09.348910  356731 command_runner.go:130] > pod "busybox-5bc68d56bd-f54s2" has DeletionTimestamp older than 1 seconds, skipping
	I1128 03:19:09.348921  356731 command_runner.go:130] > node/multinode-112998-m03 drained
	I1128 03:19:09.348959  356731 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-112998-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.230531117s)
	I1128 03:19:09.348998  356731 node.go:108] successfully drained node "m03"
	I1128 03:19:09.349528  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:19:09.349858  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:19:09.350393  356731 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1128 03:19:09.350514  356731 round_trippers.go:463] DELETE https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:09.350527  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:09.350539  356731 round_trippers.go:473]     Content-Type: application/json
	I1128 03:19:09.350551  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:09.350563  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:09.363502  356731 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1128 03:19:09.363526  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:09.363535  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:09.363543  356731 round_trippers.go:580]     Content-Length: 171
	I1128 03:19:09.363551  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:09 GMT
	I1128 03:19:09.363559  356731 round_trippers.go:580]     Audit-Id: 83fa5de6-cd37-4c5b-bae9-809046d9f2aa
	I1128 03:19:09.363567  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:09.363575  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:09.363585  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:09.363658  356731 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-112998-m03","kind":"nodes","uid":"471d28bb-efb4-436f-9b13-4d96112b9f87"}}
	I1128 03:19:09.363721  356731 node.go:124] successfully deleted node "m03"
	I1128 03:19:09.363735  356731 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
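The removal above happens in two steps: the literal kubectl drain command logged a few lines earlier (executed over SSH by ssh_runner), followed by a direct DELETE of the Node object against the API server. For readers who want to reproduce the second step outside minikube, a minimal client-go sketch follows; the kubeconfig path is a placeholder and this is not minikube's own code.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from a kubeconfig file (placeholder path).
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Equivalent of the DELETE /api/v1/nodes/multinode-112998-m03 request shown above.
        if err := clientset.CoreV1().Nodes().Delete(context.TODO(), "multinode-112998-m03", metav1.DeleteOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("node deleted")
    }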
	I1128 03:19:09.363771  356731 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 03:19:09.363803  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token dow6c5.x69qu0gd4nwx6zn1 --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-112998-m03"
	I1128 03:19:09.422356  356731 command_runner.go:130] ! W1128 03:19:09.414052    2336 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I1128 03:19:09.422419  356731 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1128 03:19:09.569730  356731 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1128 03:19:09.569766  356731 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1128 03:19:10.319462  356731 command_runner.go:130] > [preflight] Running pre-flight checks
	I1128 03:19:10.319498  356731 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1128 03:19:10.319512  356731 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1128 03:19:10.319525  356731 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 03:19:10.319536  356731 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 03:19:10.319544  356731 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1128 03:19:10.319554  356731 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I1128 03:19:10.319561  356731 command_runner.go:130] > This node has joined the cluster:
	I1128 03:19:10.319570  356731 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I1128 03:19:10.319578  356731 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I1128 03:19:10.319588  356731 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I1128 03:19:10.319627  356731 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1128 03:19:10.591186  356731 start.go:306] JoinCluster complete in 4.665879457s
	I1128 03:19:10.591225  356731 cni.go:84] Creating CNI manager for ""
	I1128 03:19:10.591231  356731 cni.go:136] 3 nodes found, recommending kindnet
	I1128 03:19:10.591299  356731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1128 03:19:10.598073  356731 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1128 03:19:10.598094  356731 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I1128 03:19:10.598102  356731 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I1128 03:19:10.598108  356731 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1128 03:19:10.598114  356731 command_runner.go:130] > Access: 2023-11-28 03:14:47.716335571 +0000
	I1128 03:19:10.598120  356731 command_runner.go:130] > Modify: 2023-11-16 19:19:18.000000000 +0000
	I1128 03:19:10.598126  356731 command_runner.go:130] > Change: 2023-11-28 03:14:45.792335571 +0000
	I1128 03:19:10.598133  356731 command_runner.go:130] >  Birth: -
	I1128 03:19:10.598343  356731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I1128 03:19:10.598359  356731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I1128 03:19:10.617324  356731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1128 03:19:10.890026  356731 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:19:10.895075  356731 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1128 03:19:10.903155  356731 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1128 03:19:10.919623  356731 command_runner.go:130] > daemonset.apps/kindnet configured
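The four "unchanged"/"configured" lines are kubectl's apply summary for the kindnet CNI manifest just copied to /var/tmp/minikube/cni.yaml. The test does not verify the DaemonSet rollout directly at this point, but a check of that kind would look roughly like the following client-go sketch (same imports and clientset as the sketch above; illustrative only, not part of the test):

    // waitKindnetReady reports whether the kindnet DaemonSet currently has all
    // desired pods ready on the cluster's nodes (sketch only).
    func waitKindnetReady(ctx context.Context, clientset kubernetes.Interface) (bool, error) {
        ds, err := clientset.AppsV1().DaemonSets("kube-system").Get(ctx, "kindnet", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return ds.Status.NumberReady == ds.Status.DesiredNumberScheduled, nil
    }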
	I1128 03:19:10.923077  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:19:10.923367  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:19:10.923726  356731 round_trippers.go:463] GET https://192.168.39.73:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1128 03:19:10.923743  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.923784  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.923797  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.926272  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:10.926290  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.926298  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.926304  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.926309  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.926314  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.926320  356731 round_trippers.go:580]     Content-Length: 291
	I1128 03:19:10.926325  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.926331  356731 round_trippers.go:580]     Audit-Id: 340eafa7-7c51-418b-915f-7ae77318d52d
	I1128 03:19:10.926361  356731 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"722e10cd-af13-449a-984b-faf3aaa4e33e","resourceVersion":"899","creationTimestamp":"2023-11-28T03:04:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1128 03:19:10.926461  356731 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-112998" context rescaled to 1 replicas
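The GET on .../deployments/coredns/scale and the "rescaled to 1 replicas" message correspond to reading (and, when the counts differ, updating) the Deployment's scale subresource. In client-go terms that step is roughly the following, a sketch under the same imports as above and not minikube's implementation:

    // rescaleCoreDNS pins the coredns Deployment to the given replica count
    // via the scale subresource (sketch only).
    func rescaleCoreDNS(ctx context.Context, clientset kubernetes.Interface, replicas int32) error {
        scale, err := clientset.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if scale.Spec.Replicas == replicas {
            return nil // already at the desired count, nothing to do
        }
        scale.Spec.Replicas = replicas
        _, err = clientset.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }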
	I1128 03:19:10.926495  356731 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.192 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I1128 03:19:10.929032  356731 out.go:177] * Verifying Kubernetes components...
	I1128 03:19:10.930537  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:19:10.949309  356731 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:19:10.949602  356731 kapi.go:59] client config for multinode-112998: &rest.Config{Host:"https://192.168.39.73:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.crt", KeyFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/profiles/multinode-112998/client.key", CAFile:"/home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), N
extProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c258a0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1128 03:19:10.949852  356731 node_ready.go:35] waiting up to 6m0s for node "multinode-112998-m03" to be "Ready" ...
	I1128 03:19:10.949937  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:10.949949  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.949963  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.949971  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.962337  356731 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I1128 03:19:10.962364  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.962372  356731 round_trippers.go:580]     Audit-Id: b18640bf-e4ca-4922-8e53-45cade41845f
	I1128 03:19:10.962378  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.962383  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.962388  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.962393  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.962399  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.962484  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"8f53a58e-a6fa-4925-a7f0-fb016cd54291","resourceVersion":"1225","creationTimestamp":"2023-11-28T03:19:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1128 03:19:10.962744  356731 node_ready.go:49] node "multinode-112998-m03" has status "Ready":"True"
	I1128 03:19:10.962760  356731 node_ready.go:38] duration metric: took 12.891904ms waiting for node "multinode-112998-m03" to be "Ready" ...
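node_ready.go reports "Ready":"True" by fetching the Node object and inspecting its status conditions. A minimal version of that check, as a sketch assuming the clientset above plus corev1 "k8s.io/api/core/v1":

    // nodeIsReady returns true when the Node has a Ready condition with status True.
    func nodeIsReady(ctx context.Context, clientset kubernetes.Interface, name string) (bool, error) {
        node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }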
	I1128 03:19:10.962769  356731 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:19:10.962833  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods
	I1128 03:19:10.962842  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.962849  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.962854  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.967936  356731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:19:10.967957  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.967964  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.967970  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.967977  356731 round_trippers.go:580]     Audit-Id: 9a76ded1-8f8e-4856-b700-339f0ec47d2f
	I1128 03:19:10.967985  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.967992  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.968000  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.969080  356731 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1230"},"items":[{"metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82197 chars]
	I1128 03:19:10.971794  356731 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.971862  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-sd64m
	I1128 03:19:10.971870  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.971919  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.971931  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.974079  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:10.974100  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.974111  356731 round_trippers.go:580]     Audit-Id: d2601a44-56d1-430c-9c08-2e39e740f03b
	I1128 03:19:10.974120  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.974127  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.974132  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.974137  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.974143  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.974295  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-sd64m","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"0d5cae9f-6647-42f9-a8e7-1f14dc9fa422","resourceVersion":"881","creationTimestamp":"2023-11-28T03:04:57Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"fa5296ff-a361-4cc5-a9c8-3740662920f0","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"fa5296ff-a361-4cc5-a9c8-3740662920f0\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6264 chars]
	I1128 03:19:10.974692  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:10.974705  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.974712  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.974718  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.976554  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:19:10.976574  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.976584  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.976627  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.976640  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.976648  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.976657  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.976668  356731 round_trippers.go:580]     Audit-Id: edcaaed7-5e5c-4b94-b1bc-36e0ca6de159
	I1128 03:19:10.976946  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:10.977294  356731 pod_ready.go:92] pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:10.977312  356731 pod_ready.go:81] duration metric: took 5.497ms waiting for pod "coredns-5dd5756b68-sd64m" in "kube-system" namespace to be "Ready" ...
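Each pod_ready.go wait that follows (etcd, kube-apiserver, kube-controller-manager, the kube-proxy pods) repeats the pattern of the coredns check that just finished: GET the pod, GET its node, then report Ready. The per-pod half of that pattern, in sketch form under the same assumptions as the node check above:

    // podIsReady mirrors the per-pod readiness check repeated below (sketch only).
    func podIsReady(ctx context.Context, clientset kubernetes.Interface, namespace, name string) (bool, error) {
        pod, err := clientset.CoreV1().Pods(namespace).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }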
	I1128 03:19:10.977319  356731 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.977362  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-112998
	I1128 03:19:10.977369  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.977376  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.977382  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.979516  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:10.979536  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.979545  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.979554  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.979562  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.979570  356731 round_trippers.go:580]     Audit-Id: 3b1e5a30-3f7f-47d6-883c-a97cabe524cd
	I1128 03:19:10.979577  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.979589  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.979920  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-112998","namespace":"kube-system","uid":"d09c5f66-0756-4402-ae0e-3b10c34e059c","resourceVersion":"874","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.73:2379","kubernetes.io/config.hash":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.mirror":"424bc6684b5cae600504832fd6cb287f","kubernetes.io/config.seen":"2023-11-28T03:04:44.384307907Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 5843 chars]
	I1128 03:19:10.980252  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:10.980266  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.980273  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.980279  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.982384  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:10.982404  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.982413  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.982422  356731 round_trippers.go:580]     Audit-Id: e86b9ba7-a20e-473e-a0de-de3f70ed3635
	I1128 03:19:10.982430  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.982438  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.982446  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.982454  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.982605  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:10.982878  356731 pod_ready.go:92] pod "etcd-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:10.982891  356731 pod_ready.go:81] duration metric: took 5.566389ms waiting for pod "etcd-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.982904  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.982945  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-112998
	I1128 03:19:10.982951  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.982958  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.982963  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.985351  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:10.985371  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.985380  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.985388  356731 round_trippers.go:580]     Audit-Id: 1b2f13c8-3621-424c-9225-426d148e3ef0
	I1128 03:19:10.985397  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.985404  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.985412  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.985419  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.985583  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-112998","namespace":"kube-system","uid":"2191c8f0-3de1-4415-9bc9-b5dc50008609","resourceVersion":"901","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.73:8443","kubernetes.io/config.hash":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.mirror":"f38601fa395350043ca26b7c11be4397","kubernetes.io/config.seen":"2023-11-28T03:04:44.384313035Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7380 chars]
	I1128 03:19:10.985913  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:10.985924  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.985931  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.985937  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.987765  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:19:10.987802  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.987811  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.987820  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.987828  356731 round_trippers.go:580]     Audit-Id: a48cadc0-91be-493c-9905-4f93410c3fd4
	I1128 03:19:10.987835  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.987844  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.987855  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.988280  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:10.988612  356731 pod_ready.go:92] pod "kube-apiserver-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:10.988628  356731 pod_ready.go:81] duration metric: took 5.717565ms waiting for pod "kube-apiserver-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.988640  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.988697  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-112998
	I1128 03:19:10.988707  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.988718  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.988729  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.990655  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:19:10.990669  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.990676  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.990681  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.990686  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.990691  356731 round_trippers.go:580]     Audit-Id: f0fd5149-830e-458b-b6fe-faa57a8fa39c
	I1128 03:19:10.990696  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.990701  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.990953  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-112998","namespace":"kube-system","uid":"9c108920-a3e5-4377-96a3-97a4538555a0","resourceVersion":"883","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.mirror":"8aad7d6fb2125381c02e5fd8434005a3","kubernetes.io/config.seen":"2023-11-28T03:04:44.384314206Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6950 chars]
	I1128 03:19:10.991281  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:10.991295  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:10.991301  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:10.991307  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:10.993177  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:19:10.993191  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:10.993197  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:10.993202  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:10.993213  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:10.993221  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:10 GMT
	I1128 03:19:10.993230  356731 round_trippers.go:580]     Audit-Id: de33a5bf-2387-4f82-b8b2-8bf33d3f7459
	I1128 03:19:10.993240  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:10.993359  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:10.993645  356731 pod_ready.go:92] pod "kube-controller-manager-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:10.993659  356731 pod_ready.go:81] duration metric: took 5.011809ms waiting for pod "kube-controller-manager-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:10.993667  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:11.150640  356731 request.go:629] Waited for 156.913722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:19:11.150711  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:19:11.150719  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:11.150735  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:11.150749  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:11.154528  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:19:11.154554  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:11.154562  356731 round_trippers.go:580]     Audit-Id: 00d97c8b-8dba-44c5-be1c-c40cecf70069
	I1128 03:19:11.154568  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:11.154573  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:11.154578  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:11.154583  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:11.154588  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:11 GMT
	I1128 03:19:11.155649  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"1230","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
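The "Waited ... due to client-side throttling, not priority and fairness" lines here and below come from client-go's client-side rate limiter, not from the API server: the rest.Config dumped earlier carries QPS:0 and Burst:0, which client-go treats as its defaults (roughly 5 requests/s with a burst of 10), so bursts of node and pod polling are briefly queued. Loosening that limit in a client of one's own is a config-level change, shown below with illustrative values only (assumes "k8s.io/client-go/rest"; the test itself runs with the defaults):

    // withRelaxedThrottling raises client-go's client-side rate limits.
    // Leaving QPS/Burst at zero on rest.Config means the library defaults apply.
    func withRelaxedThrottling(cfg *rest.Config) *rest.Config {
        cfg.QPS = 50    // example value
        cfg.Burst = 100 // example value
        return cfg
    }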
	I1128 03:19:11.350506  356731 request.go:629] Waited for 194.346764ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:11.350598  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:11.350605  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:11.350616  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:11.350623  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:11.353283  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:11.353312  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:11.353323  356731 round_trippers.go:580]     Audit-Id: 41508f3f-5e5a-4b32-9be2-cc09971654ad
	I1128 03:19:11.353332  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:11.353339  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:11.353347  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:11.353359  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:11.353370  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:11 GMT
	I1128 03:19:11.353544  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"8f53a58e-a6fa-4925-a7f0-fb016cd54291","resourceVersion":"1225","creationTimestamp":"2023-11-28T03:19:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1128 03:19:11.550466  356731 request.go:629] Waited for 196.492524ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:19:11.550545  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:19:11.550551  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:11.550559  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:11.550566  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:11.553310  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:11.553337  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:11.553347  356731 round_trippers.go:580]     Audit-Id: 4a694393-1e0f-44e7-a8a7-e8f9eb045926
	I1128 03:19:11.553356  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:11.553364  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:11.553372  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:11.553397  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:11.553402  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:11 GMT
	I1128 03:19:11.553553  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"1230","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5887 chars]
	I1128 03:19:11.750368  356731 request.go:629] Waited for 196.325008ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:11.750465  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:11.750476  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:11.750489  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:11.750513  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:11.753355  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:11.753376  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:11.753383  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:11.753389  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:11.753394  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:11.753399  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:11 GMT
	I1128 03:19:11.753404  356731 round_trippers.go:580]     Audit-Id: 3ffb263f-b120-48f8-83d2-d1afd718275b
	I1128 03:19:11.753409  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:11.753556  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"8f53a58e-a6fa-4925-a7f0-fb016cd54291","resourceVersion":"1225","creationTimestamp":"2023-11-28T03:19:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1128 03:19:12.254171  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bm5x4
	I1128 03:19:12.254201  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.254209  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.254216  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.256772  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:12.256798  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.256809  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.256817  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.256825  356731 round_trippers.go:580]     Audit-Id: 5a185e55-4d58-40ff-baf9-b59422d7d743
	I1128 03:19:12.256831  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.256836  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.256842  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.257057  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bm5x4","generateName":"kube-proxy-","namespace":"kube-system","uid":"c478a3ff-3c8e-4f10-88c1-2b6f62b1699d","resourceVersion":"1240","creationTimestamp":"2023-11-28T03:06:40Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:06:40Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I1128 03:19:12.257484  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m03
	I1128 03:19:12.257500  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.257510  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.257519  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.259308  356731 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1128 03:19:12.259323  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.259329  356731 round_trippers.go:580]     Audit-Id: 46027542-344a-4acb-a798-9e76642ce993
	I1128 03:19:12.259335  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.259340  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.259348  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.259353  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.259361  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.259476  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m03","uid":"8f53a58e-a6fa-4925-a7f0-fb016cd54291","resourceVersion":"1225","creationTimestamp":"2023-11-28T03:19:10Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:19:10Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I1128 03:19:12.259779  356731 pod_ready.go:92] pod "kube-proxy-bm5x4" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:12.259798  356731 pod_ready.go:81] duration metric: took 1.266124774s waiting for pod "kube-proxy-bm5x4" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:12.259811  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:12.350079  356731 request.go:629] Waited for 90.197234ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:19:12.350254  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-bmr6b
	I1128 03:19:12.350275  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.350287  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.350299  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.356219  356731 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1128 03:19:12.356247  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.356254  356731 round_trippers.go:580]     Audit-Id: 46c47678-a6a2-4d30-8fd8-c3ca60849ba5
	I1128 03:19:12.356259  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.356264  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.356269  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.356275  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.356280  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.356504  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-bmr6b","generateName":"kube-proxy-","namespace":"kube-system","uid":"0d9b86f2-025d-424d-a66f-ad3255685aca","resourceVersion":"860","creationTimestamp":"2023-11-28T03:04:56Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5514 chars]
	I1128 03:19:12.550482  356731 request.go:629] Waited for 193.387974ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:12.550548  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:12.550553  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.550560  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.550575  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.554148  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:19:12.554177  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.554187  356731 round_trippers.go:580]     Audit-Id: 97bc92b2-6e91-48d8-a36a-6b2e223e25ba
	I1128 03:19:12.554196  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.554204  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.554213  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.554221  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.554229  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.554372  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:12.554756  356731 pod_ready.go:92] pod "kube-proxy-bmr6b" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:12.554776  356731 pod_ready.go:81] duration metric: took 294.957991ms waiting for pod "kube-proxy-bmr6b" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:12.554786  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:12.750167  356731 request.go:629] Waited for 195.308423ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:19:12.750253  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-proxy-jgxjs
	I1128 03:19:12.750280  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.750311  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.750325  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.754598  356731 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1128 03:19:12.754620  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.754627  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.754643  356731 round_trippers.go:580]     Audit-Id: a596871f-50c5-4959-a5d3-09b213e929b8
	I1128 03:19:12.754648  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.754654  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.754659  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.754667  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.754863  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-jgxjs","generateName":"kube-proxy-","namespace":"kube-system","uid":"d8ea73b8-f8e1-4e14-b9cd-4da515a90b3d","resourceVersion":"1063","creationTimestamp":"2023-11-28T03:05:47Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"53c8278c-cdda-40b4-8059-a57076c14b3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:05:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"53c8278c-cdda-40b4-8059-a57076c14b3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I1128 03:19:12.950796  356731 request.go:629] Waited for 195.357603ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:19:12.950882  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998-m02
	I1128 03:19:12.950889  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:12.950897  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:12.950907  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:12.953773  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:12.953804  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:12.953814  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:12.953821  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:12.953828  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:12.953835  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:12.953842  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:12 GMT
	I1128 03:19:12.953850  356731 round_trippers.go:580]     Audit-Id: 787444e8-203c-4838-8a3c-b3378404a427
	I1128 03:19:12.954171  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998-m02","uid":"25a285c1-84a3-4258-9cf7-d6faf52fd6b2","resourceVersion":"1045","creationTimestamp":"2023-11-28T03:17:20Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:17:20Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3441 chars]
	I1128 03:19:12.954439  356731 pod_ready.go:92] pod "kube-proxy-jgxjs" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:12.954453  356731 pod_ready.go:81] duration metric: took 399.65882ms waiting for pod "kube-proxy-jgxjs" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:12.954461  356731 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:13.150938  356731 request.go:629] Waited for 196.383897ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:19:13.151042  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-112998
	I1128 03:19:13.151055  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:13.151066  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:13.151076  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:13.153944  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:13.153969  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:13.153978  356731 round_trippers.go:580]     Audit-Id: 50fdc884-b96c-40e4-816e-21eb73880cc1
	I1128 03:19:13.153986  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:13.153994  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:13.154001  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:13.154008  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:13.154016  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:13 GMT
	I1128 03:19:13.154269  356731 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-112998","namespace":"kube-system","uid":"b32dbcd4-76a8-4b87-b7d8-701f78a8285f","resourceVersion":"875","creationTimestamp":"2023-11-28T03:04:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.mirror":"49372038efccb5b42d91203468562dfb","kubernetes.io/config.seen":"2023-11-28T03:04:44.384315431Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-11-28T03:04:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4680 chars]
	I1128 03:19:13.350050  356731 request.go:629] Waited for 195.327939ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:13.350130  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes/multinode-112998
	I1128 03:19:13.350137  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:13.350148  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:13.350155  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:13.352731  356731 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1128 03:19:13.352754  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:13.352763  356731 round_trippers.go:580]     Audit-Id: f58fd1f2-0498-4d59-bc52-7ca1236841d0
	I1128 03:19:13.352771  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:13.352778  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:13.352785  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:13.352792  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:13.352801  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:13 GMT
	I1128 03:19:13.353167  356731 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-11-28T03:04:41Z","fieldsType":"FieldsV1","fiel [truncated 6212 chars]
	I1128 03:19:13.353628  356731 pod_ready.go:92] pod "kube-scheduler-multinode-112998" in "kube-system" namespace has status "Ready":"True"
	I1128 03:19:13.353650  356731 pod_ready.go:81] duration metric: took 399.173549ms waiting for pod "kube-scheduler-multinode-112998" in "kube-system" namespace to be "Ready" ...
	I1128 03:19:13.353665  356731 pod_ready.go:38] duration metric: took 2.390885373s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 03:19:13.353680  356731 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 03:19:13.353739  356731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:19:13.369616  356731 system_svc.go:56] duration metric: took 15.929041ms WaitForService to wait for kubelet.
	I1128 03:19:13.369650  356731 kubeadm.go:581] duration metric: took 2.443133494s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 03:19:13.369669  356731 node_conditions.go:102] verifying NodePressure condition ...
	I1128 03:19:13.550059  356731 request.go:629] Waited for 180.280875ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.73:8443/api/v1/nodes
	I1128 03:19:13.550141  356731 round_trippers.go:463] GET https://192.168.39.73:8443/api/v1/nodes
	I1128 03:19:13.550149  356731 round_trippers.go:469] Request Headers:
	I1128 03:19:13.550159  356731 round_trippers.go:473]     Accept: application/json, */*
	I1128 03:19:13.550169  356731 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1128 03:19:13.553214  356731 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1128 03:19:13.553242  356731 round_trippers.go:577] Response Headers:
	I1128 03:19:13.553249  356731 round_trippers.go:580]     Audit-Id: 5df9586f-4cb1-4b84-a0b8-3d51f941abbb
	I1128 03:19:13.553255  356731 round_trippers.go:580]     Cache-Control: no-cache, private
	I1128 03:19:13.553261  356731 round_trippers.go:580]     Content-Type: application/json
	I1128 03:19:13.553269  356731 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 76d69e9a-26b2-4c9d-89ad-10588a4ca3a0
	I1128 03:19:13.553278  356731 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: beef3980-3269-470e-b650-2e09695f8ee6
	I1128 03:19:13.553287  356731 round_trippers.go:580]     Date: Tue, 28 Nov 2023 03:19:13 GMT
	I1128 03:19:13.553824  356731 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1244"},"items":[{"metadata":{"name":"multinode-112998","uid":"8ff76bc1-c172-480b-b9f7-6fa63cf6084b","resourceVersion":"911","creationTimestamp":"2023-11-28T03:04:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-112998","kubernetes.io/os":"linux","minikube.k8s.io/commit":"4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9","minikube.k8s.io/name":"multinode-112998","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_11_28T03_04_45_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15133 chars]
	I1128 03:19:13.554608  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:19:13.554634  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:19:13.554647  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:19:13.554655  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:19:13.554663  356731 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 03:19:13.554672  356731 node_conditions.go:123] node cpu capacity is 2
	I1128 03:19:13.554681  356731 node_conditions.go:105] duration metric: took 185.004332ms to run NodePressure ...
	I1128 03:19:13.554698  356731 start.go:228] waiting for startup goroutines ...
	I1128 03:19:13.554724  356731 start.go:242] writing updated cluster config ...
	I1128 03:19:13.555119  356731 ssh_runner.go:195] Run: rm -f paused
	I1128 03:19:13.603485  356731 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 03:19:13.606347  356731 out.go:177] * Done! kubectl is now configured to use "multinode-112998" cluster and "default" namespace by default
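For readers tracing the pod_ready waits above, the following is a minimal, hypothetical client-go sketch of the same pattern: poll a pod in kube-system until its Ready condition is True, within a timeout. The package paths, kubeconfig location, and poll interval are assumptions for illustration only; this is not minikube's own pod_ready.go implementation.

// Hypothetical sketch of the readiness polling seen in the pod_ready.go lines above.
// Assumes client-go and a kubeconfig at the default location; not minikube's own code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && podReady(pod) {
			return nil
		}
		time.Sleep(200 * time.Millisecond) // crude fixed interval; the real code uses longer, adaptive waits
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPodReady(context.Background(), cs, "kube-system", "kube-proxy-bmr6b", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}

As a side note, the repeated "Waited for ... due to client-side throttling, not priority and fairness" messages earlier in the log come from client-go's default client-side rate limiter (tunable via QPS and Burst on rest.Config), not from API Priority and Fairness on the server.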
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:14:46 UTC, ends at Tue 2023-11-28 03:19:14 UTC. --
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.771048744Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701141554771035238,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5bedabbd-398f-4d5c-827b-2226958891e2 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.771514826Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ab9d186e-1786-4d82-8e19-c40c1a1d4ac4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.771589906Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ab9d186e-1786-4d82-8e19-c40c1a1d4ac4 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.771793778Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08efd4e70f0b961d9ce922f498dfd6891bd0ed92607e088c03f842692bb6f2cf,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701141351365631613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f227bd320dca699fec68154174469d677e7fd3097be84dc41396f6d8e1c6639,PodSandboxId:40333009d57c90a778434ccb70e2c1c65d767a9efcb6cde03895d2816ef4423a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701141328611681311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cae1b4439274e0b9e9a4b7628aa213286302bd8188205827930e6dcb5ae2b8,PodSandboxId:9bb9627c2a1221b3cd69fba7d98273994f28995aff86a6b6042283fc3b5319c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701141327629015078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176e2aef709cd1b3c1a0c75ce816c7852d7c67df814c74e1a5cfdec3a3c81912,PodSandboxId:1dca908ffc8d5241b2f53146ab6775093ad9049c1679e6815700bd35997b7c84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701141322587918664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb93493eade1dc8db64e37a5f9c7bf06fc62099961dad8b3963e5bcea94d56ab,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701141320165596182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99b4dc666d05710cf41cd028faf03b3cd9fc52191c549adc097cd98c531bea8,PodSandboxId:c0cebd6839ee3527d59783986bcbf2e971b422fd2efba88210bce233e44eb3a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701141320192982884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad325568
5aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7896280da1e7e7ef9eff937d65fa87be0a237c8548600642a471d8ea24f0a574,PodSandboxId:06cb6db116f4bdcf40a8a7128fe7bba890b0a8fb92c981b8ed1fe3fd34b40bf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701141313579021169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fdf50157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9a7f3aa443ec85aa178c76d4693851739800be5f0f81e06b46530bf5cc5a80,PodSandboxId:97a42f1f8dde80cb32931f92df4898682eef23dcffc8f2e56448600bf506080a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701141313512521725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f974640a8d33695129d6b5dfdd8fb2ba7da3370cce89d12759cca6a187ac79e,PodSandboxId:952f8e13f943bb05cd56ecb1a099779ddecf6528d0aeeb8d0c4887c58637ef31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701141313222697978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4094dd39c10d9fc256e3a10ba0788e0bfe21cc22d9859c4b06c810c98939e714,PodSandboxId:ce05884871d873cf42f004e00d8cd10339001441a1e88c99ef47f6e28a295e6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701141313114067284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 461cc332,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ab9d186e-1786-4d82-8e19-c40c1a1d4ac4 name=/runtime.v1.RuntimeService/ListContainers
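The journal entries above show the kubelet's CRI client polling CRI-O's RuntimeService and ImageService over the unix socket. Below is a minimal, hypothetical sketch of issuing the same Version and ListContainers RPCs directly with the k8s.io/cri-api Go client; the socket path matches the cri-socket annotation in the node objects earlier, everything else is illustrative and not part of the test itself.

// Hypothetical sketch: call CRI-O's CRI endpoint directly, mirroring the
// /runtime.v1.RuntimeService/Version and ListContainers requests in the journal above.
// Assumes google.golang.org/grpc and k8s.io/cri-api are available as modules.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same socket as the kubeadm.alpha.kubernetes.io/cri-socket annotation above.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	// No filter: returns the full container list, like the "No filters were applied" debug lines above.
	list, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range list.Containers {
		fmt.Printf("%s  %s  %s\n", c.Id[:12], c.Metadata.Name, c.State)
	}
}

From a shell on the node, crictl (e.g. "crictl version" and "crictl ps -a") exercises the same endpoints without any code.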
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.808836141Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e626a9e3-5253-4516-8e7d-618e2d0a4f04 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.808918039Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e626a9e3-5253-4516-8e7d-618e2d0a4f04 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.811145281Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=23dc748c-e9d0-405d-911d-6cb9c7d78ca7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.811589428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701141554811576404,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=23dc748c-e9d0-405d-911d-6cb9c7d78ca7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.812191600Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ed42e7c1-b378-4adf-86bf-c9c2010f28cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.812263156Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ed42e7c1-b378-4adf-86bf-c9c2010f28cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.812542779Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08efd4e70f0b961d9ce922f498dfd6891bd0ed92607e088c03f842692bb6f2cf,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701141351365631613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f227bd320dca699fec68154174469d677e7fd3097be84dc41396f6d8e1c6639,PodSandboxId:40333009d57c90a778434ccb70e2c1c65d767a9efcb6cde03895d2816ef4423a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701141328611681311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cae1b4439274e0b9e9a4b7628aa213286302bd8188205827930e6dcb5ae2b8,PodSandboxId:9bb9627c2a1221b3cd69fba7d98273994f28995aff86a6b6042283fc3b5319c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701141327629015078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176e2aef709cd1b3c1a0c75ce816c7852d7c67df814c74e1a5cfdec3a3c81912,PodSandboxId:1dca908ffc8d5241b2f53146ab6775093ad9049c1679e6815700bd35997b7c84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701141322587918664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb93493eade1dc8db64e37a5f9c7bf06fc62099961dad8b3963e5bcea94d56ab,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701141320165596182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99b4dc666d05710cf41cd028faf03b3cd9fc52191c549adc097cd98c531bea8,PodSandboxId:c0cebd6839ee3527d59783986bcbf2e971b422fd2efba88210bce233e44eb3a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701141320192982884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad325568
5aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7896280da1e7e7ef9eff937d65fa87be0a237c8548600642a471d8ea24f0a574,PodSandboxId:06cb6db116f4bdcf40a8a7128fe7bba890b0a8fb92c981b8ed1fe3fd34b40bf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701141313579021169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fdf50157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9a7f3aa443ec85aa178c76d4693851739800be5f0f81e06b46530bf5cc5a80,PodSandboxId:97a42f1f8dde80cb32931f92df4898682eef23dcffc8f2e56448600bf506080a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701141313512521725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f974640a8d33695129d6b5dfdd8fb2ba7da3370cce89d12759cca6a187ac79e,PodSandboxId:952f8e13f943bb05cd56ecb1a099779ddecf6528d0aeeb8d0c4887c58637ef31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701141313222697978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4094dd39c10d9fc256e3a10ba0788e0bfe21cc22d9859c4b06c810c98939e714,PodSandboxId:ce05884871d873cf42f004e00d8cd10339001441a1e88c99ef47f6e28a295e6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701141313114067284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 461cc332,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ed42e7c1-b378-4adf-86bf-c9c2010f28cb name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.848269738Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1dc1c09d-76e5-4baa-b36f-696feb05c4c9 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.848433955Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1dc1c09d-76e5-4baa-b36f-696feb05c4c9 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.849763230Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=188182c6-a80e-41a7-a110-6b2a54bf0c26 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.850112109Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701141554850100791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=188182c6-a80e-41a7-a110-6b2a54bf0c26 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.851023981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d5e68547-762c-4928-9548-06581e858c2a name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.851095529Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d5e68547-762c-4928-9548-06581e858c2a name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.851298295Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08efd4e70f0b961d9ce922f498dfd6891bd0ed92607e088c03f842692bb6f2cf,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701141351365631613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f227bd320dca699fec68154174469d677e7fd3097be84dc41396f6d8e1c6639,PodSandboxId:40333009d57c90a778434ccb70e2c1c65d767a9efcb6cde03895d2816ef4423a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701141328611681311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cae1b4439274e0b9e9a4b7628aa213286302bd8188205827930e6dcb5ae2b8,PodSandboxId:9bb9627c2a1221b3cd69fba7d98273994f28995aff86a6b6042283fc3b5319c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701141327629015078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176e2aef709cd1b3c1a0c75ce816c7852d7c67df814c74e1a5cfdec3a3c81912,PodSandboxId:1dca908ffc8d5241b2f53146ab6775093ad9049c1679e6815700bd35997b7c84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701141322587918664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb93493eade1dc8db64e37a5f9c7bf06fc62099961dad8b3963e5bcea94d56ab,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701141320165596182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99b4dc666d05710cf41cd028faf03b3cd9fc52191c549adc097cd98c531bea8,PodSandboxId:c0cebd6839ee3527d59783986bcbf2e971b422fd2efba88210bce233e44eb3a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701141320192982884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad325568
5aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7896280da1e7e7ef9eff937d65fa87be0a237c8548600642a471d8ea24f0a574,PodSandboxId:06cb6db116f4bdcf40a8a7128fe7bba890b0a8fb92c981b8ed1fe3fd34b40bf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701141313579021169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fdf50157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9a7f3aa443ec85aa178c76d4693851739800be5f0f81e06b46530bf5cc5a80,PodSandboxId:97a42f1f8dde80cb32931f92df4898682eef23dcffc8f2e56448600bf506080a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701141313512521725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f974640a8d33695129d6b5dfdd8fb2ba7da3370cce89d12759cca6a187ac79e,PodSandboxId:952f8e13f943bb05cd56ecb1a099779ddecf6528d0aeeb8d0c4887c58637ef31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701141313222697978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4094dd39c10d9fc256e3a10ba0788e0bfe21cc22d9859c4b06c810c98939e714,PodSandboxId:ce05884871d873cf42f004e00d8cd10339001441a1e88c99ef47f6e28a295e6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701141313114067284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 461cc332,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d5e68547-762c-4928-9548-06581e858c2a name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.887315943Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a15681dd-fca4-4fd7-ae51-91984b69fd06 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.887451147Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a15681dd-fca4-4fd7-ae51-91984b69fd06 name=/runtime.v1.RuntimeService/Version
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.888309747Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=8a747ae9-e16a-470d-aa5e-6ff135a16f0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.888815646Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701141554888800940,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8a747ae9-e16a-470d-aa5e-6ff135a16f0c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.889257973Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6c840cd3-c2c8-4c7b-812d-6b1f985f18d5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.889333817Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6c840cd3-c2c8-4c7b-812d-6b1f985f18d5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 03:19:14 multinode-112998 crio[710]: time="2023-11-28 03:19:14.889635956Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:08efd4e70f0b961d9ce922f498dfd6891bd0ed92607e088c03f842692bb6f2cf,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701141351365631613,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7f227bd320dca699fec68154174469d677e7fd3097be84dc41396f6d8e1c6639,PodSandboxId:40333009d57c90a778434ccb70e2c1c65d767a9efcb6cde03895d2816ef4423a,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1701141328611681311,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-pmx8j,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7feaf891-161d-47cb-842c-1357fb63956c,},Annotations:map[string]string{io.kubernetes.container.hash: 571598c0,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:32cae1b4439274e0b9e9a4b7628aa213286302bd8188205827930e6dcb5ae2b8,PodSandboxId:9bb9627c2a1221b3cd69fba7d98273994f28995aff86a6b6042283fc3b5319c2,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701141327629015078,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-sd64m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d5cae9f-6647-42f9-a8e7-1f14dc9fa422,},Annotations:map[string]string{io.kubernetes.container.hash: 689e676b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:176e2aef709cd1b3c1a0c75ce816c7852d7c67df814c74e1a5cfdec3a3c81912,PodSandboxId:1dca908ffc8d5241b2f53146ab6775093ad9049c1679e6815700bd35997b7c84,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1701141322587918664,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-5pfcd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 370f4bc7-f3dd-456e-b67a-fff569e42ac1,},Annotations:map[string]string{io.kubernetes.container.hash: d2bcb8b6,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb93493eade1dc8db64e37a5f9c7bf06fc62099961dad8b3963e5bcea94d56ab,PodSandboxId:4131db186a0371fc71b0feaa413f977a68a8df2f6884d6a0e743bad63be1a2a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701141320165596182,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.ui
d: 80d85aa0-5ee8-48db-a570-fdde6138e079,},Annotations:map[string]string{io.kubernetes.container.hash: 4d1e43e1,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f99b4dc666d05710cf41cd028faf03b3cd9fc52191c549adc097cd98c531bea8,PodSandboxId:c0cebd6839ee3527d59783986bcbf2e971b422fd2efba88210bce233e44eb3a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701141320192982884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-bmr6b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d9b86f2-025d-424d-a66f-ad325568
5aca,},Annotations:map[string]string{io.kubernetes.container.hash: e26949d4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7896280da1e7e7ef9eff937d65fa87be0a237c8548600642a471d8ea24f0a574,PodSandboxId:06cb6db116f4bdcf40a8a7128fe7bba890b0a8fb92c981b8ed1fe3fd34b40bf3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701141313579021169,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 424bc6684b5cae600504832fd6cb287f,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: fdf50157,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee9a7f3aa443ec85aa178c76d4693851739800be5f0f81e06b46530bf5cc5a80,PodSandboxId:97a42f1f8dde80cb32931f92df4898682eef23dcffc8f2e56448600bf506080a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701141313512521725,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49372038efccb5b42d91203468562dfb,},Annotations:map[string]string{io.kubernetes.container.has
h: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f974640a8d33695129d6b5dfdd8fb2ba7da3370cce89d12759cca6a187ac79e,PodSandboxId:952f8e13f943bb05cd56ecb1a099779ddecf6528d0aeeb8d0c4887c58637ef31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701141313222697978,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8aad7d6fb2125381c02e5fd8434005a3,},Annotations:map[string]string{io.
kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4094dd39c10d9fc256e3a10ba0788e0bfe21cc22d9859c4b06c810c98939e714,PodSandboxId:ce05884871d873cf42f004e00d8cd10339001441a1e88c99ef47f6e28a295e6b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701141313114067284,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-112998,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f38601fa395350043ca26b7c11be4397,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 461cc332,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6c840cd3-c2c8-4c7b-812d-6b1f985f18d5 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	08efd4e70f0b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   4131db186a037       storage-provisioner
	7f227bd320dca       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   40333009d57c9       busybox-5bc68d56bd-pmx8j
	32cae1b443927       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   9bb9627c2a122       coredns-5dd5756b68-sd64m
	176e2aef709cd       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   1dca908ffc8d5       kindnet-5pfcd
	f99b4dc666d05       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   c0cebd6839ee3       kube-proxy-bmr6b
	cb93493eade1d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   4131db186a037       storage-provisioner
	7896280da1e7e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      4 minutes ago       Running             etcd                      1                   06cb6db116f4b       etcd-multinode-112998
	ee9a7f3aa443e       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      4 minutes ago       Running             kube-scheduler            1                   97a42f1f8dde8       kube-scheduler-multinode-112998
	4f974640a8d33       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   1                   952f8e13f943b       kube-controller-manager-multinode-112998
	4094dd39c10d9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            1                   ce05884871d87       kube-apiserver-multinode-112998
	
	* 
	* ==> coredns [32cae1b4439274e0b9e9a4b7628aa213286302bd8188205827930e6dcb5ae2b8] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50592 - 23106 "HINFO IN 4929306125622765433.3632319494332211947. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028771808s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-112998
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-112998
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=multinode-112998
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T03_04_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:04:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-112998
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 03:19:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 03:15:49 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 03:15:49 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 03:15:49 +0000   Tue, 28 Nov 2023 03:04:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 03:15:49 +0000   Tue, 28 Nov 2023 03:15:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    multinode-112998
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 1bda6ed0d564437f8712556bb0d814ca
	  System UUID:                1bda6ed0-d564-437f-8712-556bb0d814ca
	  Boot ID:                    2ce55e3d-6965-4e8c-9668-67836259eaa7
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-pmx8j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-sd64m                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-112998                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-5pfcd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-112998             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-112998    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-bmr6b                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-112998             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 14m                  kube-proxy       
	  Normal  Starting                 3m54s                kube-proxy       
	  Normal  Starting                 14m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)    kubelet          Node multinode-112998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)    kubelet          Node multinode-112998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)    kubelet          Node multinode-112998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     14m                  kubelet          Node multinode-112998 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  14m                  kubelet          Node multinode-112998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                  kubelet          Node multinode-112998 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                  node-controller  Node multinode-112998 event: Registered Node multinode-112998 in Controller
	  Normal  NodeReady                14m                  kubelet          Node multinode-112998 status is now: NodeReady
	  Normal  Starting                 4m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m3s (x8 over 4m3s)  kubelet          Node multinode-112998 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m3s (x8 over 4m3s)  kubelet          Node multinode-112998 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m3s (x7 over 4m3s)  kubelet          Node multinode-112998 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m44s                node-controller  Node multinode-112998 event: Registered Node multinode-112998 in Controller
	
	
	Name:               multinode-112998-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-112998-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:17:20 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-112998-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 03:19:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 03:17:20 +0000   Tue, 28 Nov 2023 03:17:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 03:17:20 +0000   Tue, 28 Nov 2023 03:17:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 03:17:20 +0000   Tue, 28 Nov 2023 03:17:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 03:17:20 +0000   Tue, 28 Nov 2023 03:17:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.31
	  Hostname:    multinode-112998-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 47452ea11cdf4a348286be6a25ec050b
	  System UUID:                47452ea1-1cdf-4a34-8286-be6a25ec050b
	  Boot ID:                    bacfa77d-60b4-4117-96a9-be81f94f3280
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-b66lc    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kindnet-v2g52               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-jgxjs            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 112s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-112998-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-112998-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-112998-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-112998-m02 status is now: NodeReady
	  Normal   NodeNotReady             3m6s                   kubelet     Node multinode-112998-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m30s (x2 over 3m30s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 115s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  115s (x2 over 115s)    kubelet     Node multinode-112998-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    115s (x2 over 115s)    kubelet     Node multinode-112998-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     115s (x2 over 115s)    kubelet     Node multinode-112998-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  115s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                115s                   kubelet     Node multinode-112998-m02 status is now: NodeReady
	
	
	Name:               multinode-112998-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-112998-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:19:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-112998-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 03:19:10 +0000   Tue, 28 Nov 2023 03:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 03:19:10 +0000   Tue, 28 Nov 2023 03:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 03:19:10 +0000   Tue, 28 Nov 2023 03:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 03:19:10 +0000   Tue, 28 Nov 2023 03:19:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.192
	  Hostname:    multinode-112998-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6497f41e07b84df69f77ddf36958f894
	  System UUID:                6497f41e-07b8-4df6-9f77-ddf36958f894
	  Boot ID:                    6203a899-3b67-4f4d-83eb-cd95fb7616ec
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f54s2    0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kindnet-587m7               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-bm5x4            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From        Message
	  ----     ------                   ----                ----        -------
	  Normal   Starting                 11m                 kube-proxy  
	  Normal   Starting                 12m                 kube-proxy  
	  Normal   Starting                 3s                  kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)   kubelet     Node multinode-112998-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)   kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)   kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                 kubelet     Node multinode-112998-m03 status is now: NodeReady
	  Normal   Starting                 11m                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)   kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)   kubelet     Node multinode-112998-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)   kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                 kubelet     Node multinode-112998-m03 status is now: NodeReady
	  Normal   NodeNotReady             84s                 kubelet     Node multinode-112998-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        51s (x2 over 111s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 6s                  kubelet     Starting kubelet.
	  Normal   NodeHasNoDiskPressure    5s (x2 over 6s)     kubelet     Node multinode-112998-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 6s)     kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                  kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                  kubelet     Node multinode-112998-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x2 over 6s)     kubelet     Node multinode-112998-m03 status is now: NodeHasSufficientMemory
	
	* 
	* ==> dmesg <==
	* [Nov28 03:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.068094] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.378711] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.576430] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.150803] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.524038] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.303100] systemd-fstab-generator[635]: Ignoring "noauto" for root device
	[  +0.112205] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.142033] systemd-fstab-generator[659]: Ignoring "noauto" for root device
	[  +0.095540] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.217689] systemd-fstab-generator[694]: Ignoring "noauto" for root device
	[Nov28 03:15] systemd-fstab-generator[908]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [7896280da1e7e7ef9eff937d65fa87be0a237c8548600642a471d8ea24f0a574] <==
	* {"level":"info","ts":"2023-11-28T03:15:15.516796Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T03:15:15.516827Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-11-28T03:15:15.517046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 switched to configuration voters=(2412776101401756344)"}
	{"level":"info","ts":"2023-11-28T03:15:15.517115Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","added-peer-id":"217be714ae9a82b8","added-peer-peer-urls":["https://192.168.39.73:2380"]}
	{"level":"info","ts":"2023-11-28T03:15:15.517316Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"97141299b087eff6","local-member-id":"217be714ae9a82b8","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:15:15.51744Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T03:15:15.526335Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T03:15:15.526589Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"217be714ae9a82b8","initial-advertise-peer-urls":["https://192.168.39.73:2380"],"listen-peer-urls":["https://192.168.39.73:2380"],"advertise-client-urls":["https://192.168.39.73:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.73:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T03:15:15.526616Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T03:15:15.526684Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2023-11-28T03:15:15.526689Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.73:2380"}
	{"level":"info","ts":"2023-11-28T03:15:17.391834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 is starting a new election at term 2"}
	{"level":"info","ts":"2023-11-28T03:15:17.391895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-11-28T03:15:17.391927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgPreVoteResp from 217be714ae9a82b8 at term 2"}
	{"level":"info","ts":"2023-11-28T03:15:17.39194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became candidate at term 3"}
	{"level":"info","ts":"2023-11-28T03:15:17.391946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 received MsgVoteResp from 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2023-11-28T03:15:17.391967Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"217be714ae9a82b8 became leader at term 3"}
	{"level":"info","ts":"2023-11-28T03:15:17.391974Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 217be714ae9a82b8 elected leader 217be714ae9a82b8 at term 3"}
	{"level":"info","ts":"2023-11-28T03:15:17.39379Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T03:15:17.394098Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T03:15:17.395185Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.73:2379"}
	{"level":"info","ts":"2023-11-28T03:15:17.3952Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T03:15:17.393835Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"217be714ae9a82b8","local-member-attributes":"{Name:multinode-112998 ClientURLs:[https://192.168.39.73:2379]}","request-path":"/0/members/217be714ae9a82b8/attributes","cluster-id":"97141299b087eff6","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T03:15:17.405302Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T03:15:17.411798Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	* 
	* ==> kernel <==
	*  03:19:15 up 4 min,  0 users,  load average: 0.43, 0.32, 0.14
	Linux multinode-112998 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [176e2aef709cd1b3c1a0c75ce816c7852d7c67df814c74e1a5cfdec3a3c81912] <==
	* I1128 03:18:44.279713       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:18:44.279847       1 main.go:227] handling current node
	I1128 03:18:44.279873       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:18:44.279892       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
	I1128 03:18:44.280007       1 main.go:223] Handling node with IPs: map[192.168.39.192:{}]
	I1128 03:18:44.280027       1 main.go:250] Node multinode-112998-m03 has CIDR [10.244.3.0/24] 
	I1128 03:18:54.285120       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:18:54.285176       1 main.go:227] handling current node
	I1128 03:18:54.285189       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:18:54.285195       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
	I1128 03:18:54.285450       1 main.go:223] Handling node with IPs: map[192.168.39.192:{}]
	I1128 03:18:54.285485       1 main.go:250] Node multinode-112998-m03 has CIDR [10.244.3.0/24] 
	I1128 03:19:04.300198       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:19:04.300258       1 main.go:227] handling current node
	I1128 03:19:04.300279       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:19:04.300289       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
	I1128 03:19:04.300660       1 main.go:223] Handling node with IPs: map[192.168.39.192:{}]
	I1128 03:19:04.300712       1 main.go:250] Node multinode-112998-m03 has CIDR [10.244.3.0/24] 
	I1128 03:19:14.311221       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I1128 03:19:14.311602       1 main.go:227] handling current node
	I1128 03:19:14.311654       1 main.go:223] Handling node with IPs: map[192.168.39.31:{}]
	I1128 03:19:14.311679       1 main.go:250] Node multinode-112998-m02 has CIDR [10.244.1.0/24] 
	I1128 03:19:14.311819       1 main.go:223] Handling node with IPs: map[192.168.39.192:{}]
	I1128 03:19:14.311839       1 main.go:250] Node multinode-112998-m03 has CIDR [10.244.2.0/24] 
	I1128 03:19:14.311902       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.39.192 Flags: [] Table: 0} 
	
	* 
	* ==> kube-apiserver [4094dd39c10d9fc256e3a10ba0788e0bfe21cc22d9859c4b06c810c98939e714] <==
	* I1128 03:15:18.841328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1128 03:15:18.841573       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1128 03:15:18.842317       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1128 03:15:18.842454       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I1128 03:15:18.890450       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1128 03:15:18.894032       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1128 03:15:18.935985       1 shared_informer.go:318] Caches are synced for configmaps
	I1128 03:15:18.936743       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1128 03:15:18.946230       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1128 03:15:18.947650       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1128 03:15:18.947697       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1128 03:15:18.947797       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1128 03:15:18.949279       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1128 03:15:18.950143       1 aggregator.go:166] initial CRD sync complete...
	I1128 03:15:18.950192       1 autoregister_controller.go:141] Starting autoregister controller
	I1128 03:15:18.950200       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1128 03:15:18.950206       1 cache.go:39] Caches are synced for autoregister controller
	E1128 03:15:18.966787       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1128 03:15:19.741682       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1128 03:15:21.432892       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1128 03:15:21.586491       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1128 03:15:21.596739       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1128 03:15:21.725934       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1128 03:15:21.734173       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1128 03:16:08.894273       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [4f974640a8d33695129d6b5dfdd8fb2ba7da3370cce89d12759cca6a187ac79e] <==
	* I1128 03:17:19.143702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="34.102µs"
	I1128 03:17:19.940550       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m03"
	I1128 03:17:20.636865       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m03"
	I1128 03:17:20.638289       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-112998-m02\" does not exist"
	I1128 03:17:20.638604       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-cbjtg" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-cbjtg"
	I1128 03:17:20.664473       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-112998-m02" podCIDRs=["10.244.1.0/24"]
	I1128 03:17:20.785832       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m03"
	I1128 03:17:21.568725       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="43.569µs"
	I1128 03:17:34.814920       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="59.512µs"
	I1128 03:17:35.405965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="81.798µs"
	I1128 03:17:35.411554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="54.9µs"
	I1128 03:17:51.669737       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m02"
	I1128 03:19:06.345071       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-b66lc"
	I1128 03:19:06.361913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.419494ms"
	I1128 03:19:06.391773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="29.74039ms"
	I1128 03:19:06.391890       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="37.675µs"
	I1128 03:19:07.680304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="8.081359ms"
	I1128 03:19:07.680719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="124.29µs"
	I1128 03:19:09.358474       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m02"
	I1128 03:19:10.014816       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-112998-m03\" does not exist"
	I1128 03:19:10.015026       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m02"
	I1128 03:19:10.017219       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-f54s2" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-f54s2"
	I1128 03:19:10.034700       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-112998-m03" podCIDRs=["10.244.2.0/24"]
	I1128 03:19:10.091691       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-112998-m02"
	I1128 03:19:10.917719       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="110.555µs"
	
	* 
	* ==> kube-proxy [f99b4dc666d05710cf41cd028faf03b3cd9fc52191c549adc097cd98c531bea8] <==
	* I1128 03:15:20.457341       1 server_others.go:69] "Using iptables proxy"
	I1128 03:15:20.470901       1 node.go:141] Successfully retrieved node IP: 192.168.39.73
	I1128 03:15:20.539883       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 03:15:20.539932       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 03:15:20.546108       1 server_others.go:152] "Using iptables Proxier"
	I1128 03:15:20.546167       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 03:15:20.546309       1 server.go:846] "Version info" version="v1.28.4"
	I1128 03:15:20.546317       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:15:20.547620       1 config.go:188] "Starting service config controller"
	I1128 03:15:20.547654       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 03:15:20.547675       1 config.go:97] "Starting endpoint slice config controller"
	I1128 03:15:20.547681       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 03:15:20.548004       1 config.go:315] "Starting node config controller"
	I1128 03:15:20.548009       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 03:15:20.652582       1 shared_informer.go:318] Caches are synced for node config
	I1128 03:15:20.652633       1 shared_informer.go:318] Caches are synced for service config
	I1128 03:15:20.652657       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	* 
	* ==> kube-scheduler [ee9a7f3aa443ec85aa178c76d4693851739800be5f0f81e06b46530bf5cc5a80] <==
	* I1128 03:15:15.748778       1 serving.go:348] Generated self-signed cert in-memory
	W1128 03:15:18.870757       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 03:15:18.870806       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 03:15:18.870816       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 03:15:18.870826       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 03:15:18.917185       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 03:15:18.917314       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:15:18.921546       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 03:15:18.921626       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 03:15:18.923582       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 03:15:18.923657       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 03:15:19.023687       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:14:46 UTC, ends at Tue 2023-11-28 03:19:15 UTC. --
	Nov 28 03:15:22 multinode-112998 kubelet[914]: E1128 03:15:22.786943     914 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 28 03:15:22 multinode-112998 kubelet[914]: E1128 03:15:22.787053     914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0d5cae9f-6647-42f9-a8e7-1f14dc9fa422-config-volume podName:0d5cae9f-6647-42f9-a8e7-1f14dc9fa422 nodeName:}" failed. No retries permitted until 2023-11-28 03:15:26.787036451 +0000 UTC m=+14.906479529 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0d5cae9f-6647-42f9-a8e7-1f14dc9fa422-config-volume") pod "coredns-5dd5756b68-sd64m" (UID: "0d5cae9f-6647-42f9-a8e7-1f14dc9fa422") : object "kube-system"/"coredns" not registered
	Nov 28 03:15:22 multinode-112998 kubelet[914]: E1128 03:15:22.887655     914 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 28 03:15:22 multinode-112998 kubelet[914]: E1128 03:15:22.887677     914 projected.go:198] Error preparing data for projected volume kube-api-access-p5q5c for pod default/busybox-5bc68d56bd-pmx8j: object "default"/"kube-root-ca.crt" not registered
	Nov 28 03:15:22 multinode-112998 kubelet[914]: E1128 03:15:22.887716     914 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7feaf891-161d-47cb-842c-1357fb63956c-kube-api-access-p5q5c podName:7feaf891-161d-47cb-842c-1357fb63956c nodeName:}" failed. No retries permitted until 2023-11-28 03:15:26.887704552 +0000 UTC m=+15.007147629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-p5q5c" (UniqueName: "kubernetes.io/projected/7feaf891-161d-47cb-842c-1357fb63956c-kube-api-access-p5q5c") pod "busybox-5bc68d56bd-pmx8j" (UID: "7feaf891-161d-47cb-842c-1357fb63956c") : object "default"/"kube-root-ca.crt" not registered
	Nov 28 03:15:23 multinode-112998 kubelet[914]: E1128 03:15:23.139757     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-sd64m" podUID="0d5cae9f-6647-42f9-a8e7-1f14dc9fa422"
	Nov 28 03:15:23 multinode-112998 kubelet[914]: E1128 03:15:23.140248     914 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-pmx8j" podUID="7feaf891-161d-47cb-842c-1357fb63956c"
	Nov 28 03:15:24 multinode-112998 kubelet[914]: I1128 03:15:24.418455     914 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 28 03:15:51 multinode-112998 kubelet[914]: I1128 03:15:51.341556     914 scope.go:117] "RemoveContainer" containerID="cb93493eade1dc8db64e37a5f9c7bf06fc62099961dad8b3963e5bcea94d56ab"
	Nov 28 03:16:12 multinode-112998 kubelet[914]: E1128 03:16:12.156962     914 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 03:16:12 multinode-112998 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 03:16:12 multinode-112998 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 03:16:12 multinode-112998 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 03:17:12 multinode-112998 kubelet[914]: E1128 03:17:12.155836     914 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 03:17:12 multinode-112998 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 03:17:12 multinode-112998 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 03:17:12 multinode-112998 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 03:18:12 multinode-112998 kubelet[914]: E1128 03:18:12.158251     914 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 03:18:12 multinode-112998 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 03:18:12 multinode-112998 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 03:18:12 multinode-112998 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 03:19:12 multinode-112998 kubelet[914]: E1128 03:19:12.155889     914 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 03:19:12 multinode-112998 kubelet[914]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 03:19:12 multinode-112998 kubelet[914]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 03:19:12 multinode-112998 kubelet[914]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-112998 -n multinode-112998
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-112998 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (701.08s)

TestMultiNode/serial/StopMultiNode (143.84s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 stop
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-112998 stop: exit status 82 (2m1.801543774s)

-- stdout --
	* Stopping node "multinode-112998"  ...
	* Stopping node "multinode-112998"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-112998 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status
E1128 03:21:23.483956  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-112998 status: exit status 3 (18.843672549s)

-- stdout --
	multinode-112998
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-112998-m02
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	E1128 03:21:38.589232  359456 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E1128 03:21:38.589273  359456 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-112998 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-112998 -n multinode-112998
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-112998 -n multinode-112998: exit status 3 (3.195510395s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1128 03:21:41.949298  359549 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E1128 03:21:41.949320  359549 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-112998" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.84s)

TestPreload (279.85s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-727563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1128 03:31:23.484161  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:31:37.271580  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-727563 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m17.931796778s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-727563 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-727563 image pull gcr.io/k8s-minikube/busybox: (1.042876433s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-727563
E1128 03:33:34.222995  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:33:43.673207  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-727563: exit status 82 (2m1.244808646s)

-- stdout --
	* Stopping node "test-preload-727563"  ...
	* Stopping node "test-preload-727563"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-727563 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2023-11-28 03:34:18.400649379 +0000 UTC m=+3205.575623517
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-727563 -n test-preload-727563
E1128 03:34:26.530562  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-727563 -n test-preload-727563: exit status 3 (18.690324221s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E1128 03:34:37.085331  362563 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host
	E1128 03:34:37.085360  362563 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.167:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-727563" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-727563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-727563
--- FAIL: TestPreload (279.85s)

TestRunningBinaryUpgrade (193.74s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.1502644390.exe start -p running-upgrade-498123 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.1502644390.exe start -p running-upgrade-498123 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m12.158023378s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-498123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-498123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (59.722797063s)

-- stdout --
	* [running-upgrade-498123] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-498123 in cluster running-upgrade-498123
	* Updating the running kvm2 "running-upgrade-498123" VM ...
	
	

-- /stdout --
** stderr ** 
	I1128 03:38:51.233939  367654 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:38:51.234123  367654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:38:51.234135  367654 out.go:309] Setting ErrFile to fd 2...
	I1128 03:38:51.234142  367654 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:38:51.234354  367654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:38:51.234943  367654 out.go:303] Setting JSON to false
	I1128 03:38:51.236017  367654 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8481,"bootTime":1701134250,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 03:38:51.236083  367654 start.go:138] virtualization: kvm guest
	I1128 03:38:51.238607  367654 out.go:177] * [running-upgrade-498123] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 03:38:51.240682  367654 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 03:38:51.240722  367654 notify.go:220] Checking for updates...
	I1128 03:38:51.242354  367654 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 03:38:51.243951  367654 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:38:51.245510  367654 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:38:51.247177  367654 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 03:38:51.248650  367654 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 03:38:51.250476  367654 config.go:182] Loaded profile config "running-upgrade-498123": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 03:38:51.250500  367654 start_flags.go:694] config upgrade: Driver=kvm2
	I1128 03:38:51.250514  367654 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 03:38:51.250590  367654 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/running-upgrade-498123/config.json ...
	I1128 03:38:51.251108  367654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:38:51.251158  367654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:38:51.266460  367654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36791
	I1128 03:38:51.266901  367654 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:38:51.267598  367654 main.go:141] libmachine: Using API Version  1
	I1128 03:38:51.267625  367654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:38:51.268031  367654 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:38:51.268329  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:38:51.270058  367654 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 03:38:51.271437  367654 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 03:38:51.271740  367654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:38:51.271779  367654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:38:51.286497  367654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42589
	I1128 03:38:51.286913  367654 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:38:51.287356  367654 main.go:141] libmachine: Using API Version  1
	I1128 03:38:51.287384  367654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:38:51.287690  367654 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:38:51.287893  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:38:51.322024  367654 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 03:38:51.323407  367654 start.go:298] selected driver: kvm2
	I1128 03:38:51.323422  367654 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-498123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.31 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 03:38:51.323533  367654 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 03:38:51.324259  367654 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.324374  367654 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 03:38:51.339291  367654 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 03:38:51.339759  367654 cni.go:84] Creating CNI manager for ""
	I1128 03:38:51.339788  367654 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1128 03:38:51.339804  367654 start_flags.go:323] config:
	{Name:running-upgrade-498123 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.31 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 03:38:51.340040  367654 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.341916  367654 out.go:177] * Starting control plane node running-upgrade-498123 in cluster running-upgrade-498123
	I1128 03:38:51.343334  367654 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1128 03:38:51.379751  367654 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1128 03:38:51.379894  367654 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/running-upgrade-498123/config.json ...
	I1128 03:38:51.379964  367654 cache.go:107] acquiring lock: {Name:mk195b1c8677a5ebd114d6c7c594a7814686f0be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380006  367654 cache.go:107] acquiring lock: {Name:mkdb51b17cbd1c3903570396d6a2e602de3fbdd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380042  367654 cache.go:107] acquiring lock: {Name:mka44dcf3eb750950dd6c16a31c74a002b958144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380096  367654 cache.go:107] acquiring lock: {Name:mk823adab4ca03e07b7fa403ff7b90b626294365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380151  367654 cache.go:107] acquiring lock: {Name:mkadb5525cedc22717277e7b9ceadda84f175666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380174  367654 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I1128 03:38:51.380204  367654 cache.go:107] acquiring lock: {Name:mk0a2b61de05fc03214604b3f8a60a8d7ec770ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380226  367654 start.go:365] acquiring machines lock for running-upgrade-498123: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:38:51.380179  367654 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I1128 03:38:51.379970  367654 cache.go:107] acquiring lock: {Name:mk120102792bce379907b33e29b6a1f16fed90c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380369  367654 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I1128 03:38:51.380406  367654 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I1128 03:38:51.380408  367654 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I1128 03:38:51.380448  367654 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 03:38:51.380461  367654 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 502.29µs
	I1128 03:38:51.380485  367654 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 03:38:51.379992  367654 cache.go:107] acquiring lock: {Name:mk0d50715374aeab1af5f08cc4536decdbaac80b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:38:51.380184  367654 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1128 03:38:51.380571  367654 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I1128 03:38:51.381889  367654 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I1128 03:38:51.381906  367654 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I1128 03:38:51.381886  367654 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I1128 03:38:51.381933  367654 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I1128 03:38:51.381961  367654 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I1128 03:38:51.381955  367654 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I1128 03:38:51.382149  367654 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I1128 03:38:51.570235  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I1128 03:38:51.603467  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I1128 03:38:51.621682  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I1128 03:38:51.663097  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1128 03:38:51.663134  367654 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 283.151561ms
	I1128 03:38:51.663150  367654 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1128 03:38:51.663415  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I1128 03:38:51.713410  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I1128 03:38:51.727597  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I1128 03:38:51.804125  367654 cache.go:162] opening:  /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I1128 03:38:52.209255  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1128 03:38:52.209285  367654 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 829.2798ms
	I1128 03:38:52.209300  367654 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1128 03:38:52.284176  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1128 03:38:52.284206  367654 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 904.254198ms
	I1128 03:38:52.284219  367654 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1128 03:38:52.664264  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1128 03:38:52.664292  367654 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.284200887s
	I1128 03:38:52.664305  367654 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1128 03:38:52.675015  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1128 03:38:52.675048  367654 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.294954662s
	I1128 03:38:52.675060  367654 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1128 03:38:52.862507  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1128 03:38:52.862541  367654 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.482516967s
	I1128 03:38:52.862554  367654 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1128 03:38:53.247200  367654 cache.go:157] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1128 03:38:53.247230  367654 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.867190208s
	I1128 03:38:53.247243  367654 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1128 03:38:53.247260  367654 cache.go:87] Successfully saved all images to host disk.
	I1128 03:39:47.182577  367654 start.go:369] acquired machines lock for "running-upgrade-498123" in 55.802288028s
	I1128 03:39:47.182638  367654 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:39:47.182647  367654 fix.go:54] fixHost starting: minikube
	I1128 03:39:47.183131  367654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:39:47.183192  367654 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:39:47.200707  367654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39301
	I1128 03:39:47.201139  367654 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:39:47.201640  367654 main.go:141] libmachine: Using API Version  1
	I1128 03:39:47.201665  367654 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:39:47.201982  367654 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:39:47.202148  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:47.202261  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetState
	I1128 03:39:47.203843  367654 fix.go:102] recreateIfNeeded on running-upgrade-498123: state=Running err=<nil>
	W1128 03:39:47.203867  367654 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:39:47.205734  367654 out.go:177] * Updating the running kvm2 "running-upgrade-498123" VM ...
	I1128 03:39:47.207137  367654 machine.go:88] provisioning docker machine ...
	I1128 03:39:47.207163  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:47.207376  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetMachineName
	I1128 03:39:47.207513  367654 buildroot.go:166] provisioning hostname "running-upgrade-498123"
	I1128 03:39:47.207541  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetMachineName
	I1128 03:39:47.207671  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:47.210599  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.211090  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.211125  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.211283  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:47.211461  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.211641  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.211817  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:47.212016  367654 main.go:141] libmachine: Using SSH client type: native
	I1128 03:39:47.212600  367654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1128 03:39:47.212624  367654 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-498123 && echo "running-upgrade-498123" | sudo tee /etc/hostname
	I1128 03:39:47.355949  367654 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-498123
	
	I1128 03:39:47.355988  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:47.359442  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.359835  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.359864  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.360184  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:47.360496  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.360677  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.360844  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:47.361050  367654 main.go:141] libmachine: Using SSH client type: native
	I1128 03:39:47.361600  367654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1128 03:39:47.361629  367654 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-498123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-498123/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-498123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:39:47.497918  367654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:39:47.497955  367654 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:39:47.497994  367654 buildroot.go:174] setting up certificates
	I1128 03:39:47.498010  367654 provision.go:83] configureAuth start
	I1128 03:39:47.498033  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetMachineName
	I1128 03:39:47.498353  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetIP
	I1128 03:39:47.501617  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.502148  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.502221  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.502480  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:47.505107  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.505446  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.505476  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.505732  367654 provision.go:138] copyHostCerts
	I1128 03:39:47.505831  367654 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:39:47.505855  367654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:39:47.505924  367654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:39:47.506062  367654 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:39:47.506070  367654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:39:47.506101  367654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:39:47.506173  367654 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:39:47.506177  367654 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:39:47.506194  367654 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:39:47.506247  367654 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-498123 san=[192.168.50.31 192.168.50.31 localhost 127.0.0.1 minikube running-upgrade-498123]
	I1128 03:39:47.725286  367654 provision.go:172] copyRemoteCerts
	I1128 03:39:47.725363  367654 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:39:47.725389  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:47.728931  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.729347  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.729432  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.729553  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:47.729783  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.730044  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:47.730225  367654 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/running-upgrade-498123/id_rsa Username:docker}
	I1128 03:39:47.829683  367654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:39:47.848805  367654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 03:39:47.868317  367654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 03:39:47.893641  367654 provision.go:86] duration metric: configureAuth took 395.609584ms
	I1128 03:39:47.893672  367654 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:39:47.893883  367654 config.go:182] Loaded profile config "running-upgrade-498123": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 03:39:47.893985  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:47.896966  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.897486  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:47.897517  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:47.897755  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:47.898004  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.898195  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:47.898340  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:47.898547  367654 main.go:141] libmachine: Using SSH client type: native
	I1128 03:39:47.899033  367654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1128 03:39:47.899070  367654 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:39:48.718846  367654 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:39:48.718881  367654 machine.go:91] provisioned docker machine in 1.511726383s
	I1128 03:39:48.718910  367654 start.go:300] post-start starting for "running-upgrade-498123" (driver="kvm2")
	I1128 03:39:48.718925  367654 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:39:48.718955  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:48.719337  367654 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:39:48.719386  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:48.722570  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:48.722981  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:48.723026  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:48.723275  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:48.723546  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:48.723733  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:48.723884  367654 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/running-upgrade-498123/id_rsa Username:docker}
	I1128 03:39:48.823539  367654 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:39:48.829378  367654 info.go:137] Remote host: Buildroot 2019.02.7
	I1128 03:39:48.829431  367654 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:39:48.829546  367654 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:39:48.829684  367654 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:39:48.829858  367654 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:39:48.837316  367654 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:39:48.853810  367654 start.go:303] post-start completed in 134.882589ms
	I1128 03:39:48.853837  367654 fix.go:56] fixHost completed within 1.671190577s
	I1128 03:39:48.853867  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:48.857009  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:48.857420  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:48.857452  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:48.857796  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:48.858061  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:48.858250  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:48.858416  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:48.858623  367654 main.go:141] libmachine: Using SSH client type: native
	I1128 03:39:48.859114  367654 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.31 22 <nil> <nil>}
	I1128 03:39:48.859136  367654 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 03:39:48.995207  367654 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701142788.991162552
	
	I1128 03:39:48.995240  367654 fix.go:206] guest clock: 1701142788.991162552
	I1128 03:39:48.995251  367654 fix.go:219] Guest: 2023-11-28 03:39:48.991162552 +0000 UTC Remote: 2023-11-28 03:39:48.853843026 +0000 UTC m=+57.670434099 (delta=137.319526ms)
	I1128 03:39:48.995276  367654 fix.go:190] guest clock delta is within tolerance: 137.319526ms
	I1128 03:39:48.995283  367654 start.go:83] releasing machines lock for "running-upgrade-498123", held for 1.81266924s
	I1128 03:39:48.995310  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:48.995568  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetIP
	I1128 03:39:48.999116  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:48.999603  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:48.999739  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:49.000069  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:49.000671  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:49.000927  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .DriverName
	I1128 03:39:49.001037  367654 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:39:49.001115  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:49.001579  367654 ssh_runner.go:195] Run: cat /version.json
	I1128 03:39:49.001640  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHHostname
	I1128 03:39:49.005271  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:49.006046  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:49.006128  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:49.006159  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:49.006502  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:49.006684  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:15:cf:89", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:37:10 +0000 UTC Type:0 Mac:52:54:00:15:cf:89 Iaid: IPaddr:192.168.50.31 Prefix:24 Hostname:running-upgrade-498123 Clientid:01:52:54:00:15:cf:89}
	I1128 03:39:49.006722  367654 main.go:141] libmachine: (running-upgrade-498123) DBG | domain running-upgrade-498123 has defined IP address 192.168.50.31 and MAC address 52:54:00:15:cf:89 in network minikube-net
	I1128 03:39:49.006786  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:49.006903  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHPort
	I1128 03:39:49.007130  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHKeyPath
	I1128 03:39:49.007175  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:49.007332  367654 main.go:141] libmachine: (running-upgrade-498123) Calling .GetSSHUsername
	I1128 03:39:49.007375  367654 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/running-upgrade-498123/id_rsa Username:docker}
	I1128 03:39:49.007866  367654 sshutil.go:53] new ssh client: &{IP:192.168.50.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/running-upgrade-498123/id_rsa Username:docker}
	W1128 03:39:49.111443  367654 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 03:39:49.111626  367654 ssh_runner.go:195] Run: systemctl --version
	I1128 03:39:49.149091  367654 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:39:49.321085  367654 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 03:39:49.328054  367654 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:39:49.328141  367654 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:39:49.334019  367654 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 03:39:49.334050  367654 start.go:472] detecting cgroup driver to use...
	I1128 03:39:49.334119  367654 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:39:49.350329  367654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:39:49.360356  367654 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:39:49.360420  367654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:39:49.370516  367654 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:39:49.385359  367654 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 03:39:49.398498  367654 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 03:39:49.398579  367654 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:39:49.553066  367654 docker.go:219] disabling docker service ...
	I1128 03:39:49.553142  367654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:39:50.580395  367654 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.027222035s)
	I1128 03:39:50.580464  367654 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:39:50.592401  367654 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:39:50.707261  367654 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:39:50.852523  367654 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:39:50.865307  367654 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:39:50.880068  367654 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 03:39:50.880135  367654 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:39:50.890306  367654 out.go:177] 
	W1128 03:39:50.891999  367654 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 03:39:50.892019  367654 out.go:239] * 
	W1128 03:39:50.893308  367654 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:39:50.895409  367654 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-498123 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:523: *** TestRunningBinaryUpgrade FAILED at 2023-11-28 03:39:50.914510994 +0000 UTC m=+3538.089485153
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-498123 -n running-upgrade-498123
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-498123 -n running-upgrade-498123: exit status 4 (275.333573ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:39:51.156176  368463 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-498123" does not appear in /home/jenkins/minikube-integration/17671-333305/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-498123" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-498123" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-498123
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-498123: (1.102915737s)
--- FAIL: TestRunningBinaryUpgrade (193.74s)
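Note on the failure above: the upgraded binary updates the CRI-O pause image by rewriting /etc/crio/crio.conf.d/02-crio.conf, but the guest built from the v1.6.2 ISO ships only the monolithic /etc/crio/crio.conf, so the sed command exits with status 1 and start aborts with RUNTIME_ENABLE. Below is a minimal Go sketch of a more tolerant update; the two candidate paths are taken from this log, while the function names and fallback behaviour are assumptions for illustration and are not minikube's implementation.

	// pauseimage_sketch.go: hypothetical, defensive variant of the pause_image
	// update that failed above. It edits whichever CRI-O config file exists
	// instead of assuming the drop-in layout.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func updatePauseImage(image string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // layout expected by the new binary
			"/etc/crio/crio.conf",                // layout present on the v1.6.x guest
		}
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		for _, path := range candidates {
			data, err := os.ReadFile(path)
			if os.IsNotExist(err) {
				continue // try the next known location instead of failing like sed did
			}
			if err != nil {
				return err
			}
			updated := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
			return os.WriteFile(path, updated, 0o644)
		}
		return fmt.Errorf("no CRI-O config found in %v", candidates)
	}

	func main() {
		if err := updatePauseImage("registry.k8s.io/pause:3.1"); err != nil {
			fmt.Fprintln(os.Stderr, "update pause_image:", err)
			os.Exit(1)
		}
	}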

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (290.19s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.843684355.exe start -p stopped-upgrade-268578 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.843684355.exe start -p stopped-upgrade-268578 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m5.020643452s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.843684355.exe -p stopped-upgrade-268578 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.843684355.exe -p stopped-upgrade-268578 stop: (1m34.785265752s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-268578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
E1128 03:43:34.222531  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:43:43.673649  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-268578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m10.369899042s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-268578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-268578 in cluster stopped-upgrade-268578
	* Restarting existing kvm2 VM for "stopped-upgrade-268578" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:43:32.508229  371290 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:43:32.508428  371290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:43:32.508440  371290 out.go:309] Setting ErrFile to fd 2...
	I1128 03:43:32.508449  371290 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:43:32.508706  371290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:43:32.509441  371290 out.go:303] Setting JSON to false
	I1128 03:43:32.510558  371290 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8763,"bootTime":1701134250,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 03:43:32.510641  371290 start.go:138] virtualization: kvm guest
	I1128 03:43:32.513287  371290 out.go:177] * [stopped-upgrade-268578] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 03:43:32.515030  371290 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 03:43:32.516563  371290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 03:43:32.515052  371290 notify.go:220] Checking for updates...
	I1128 03:43:32.519545  371290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:43:32.520977  371290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:43:32.522438  371290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 03:43:32.523894  371290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 03:43:32.525795  371290 config.go:182] Loaded profile config "stopped-upgrade-268578": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 03:43:32.525814  371290 start_flags.go:694] config upgrade: Driver=kvm2
	I1128 03:43:32.525822  371290 start_flags.go:706] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50
	I1128 03:43:32.525908  371290 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/stopped-upgrade-268578/config.json ...
	I1128 03:43:32.526459  371290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:43:32.526517  371290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:43:32.541903  371290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36987
	I1128 03:43:32.542359  371290 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:43:32.543027  371290 main.go:141] libmachine: Using API Version  1
	I1128 03:43:32.543059  371290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:43:32.543417  371290 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:43:32.543616  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:43:32.546118  371290 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I1128 03:43:32.547909  371290 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 03:43:32.548243  371290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:43:32.548284  371290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:43:32.563099  371290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37283
	I1128 03:43:32.563593  371290 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:43:32.564181  371290 main.go:141] libmachine: Using API Version  1
	I1128 03:43:32.564206  371290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:43:32.564591  371290 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:43:32.564795  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:43:32.603728  371290 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 03:43:32.605102  371290 start.go:298] selected driver: kvm2
	I1128 03:43:32.605117  371290 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-268578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.42 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 03:43:32.605225  371290 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 03:43:32.605961  371290 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.606067  371290 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 03:43:32.621706  371290 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 03:43:32.622099  371290 cni.go:84] Creating CNI manager for ""
	I1128 03:43:32.622122  371290 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I1128 03:43:32.622132  371290 start_flags.go:323] config:
	{Name:stopped-upgrade-268578 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.42 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s GPUs:}
	I1128 03:43:32.622321  371290 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.624375  371290 out.go:177] * Starting control plane node stopped-upgrade-268578 in cluster stopped-upgrade-268578
	I1128 03:43:32.625769  371290 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W1128 03:43:32.663763  371290 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1128 03:43:32.663910  371290 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/stopped-upgrade-268578/config.json ...
	I1128 03:43:32.664038  371290 cache.go:107] acquiring lock: {Name:mk120102792bce379907b33e29b6a1f16fed90c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664086  371290 cache.go:107] acquiring lock: {Name:mk0d50715374aeab1af5f08cc4536decdbaac80b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664121  371290 cache.go:107] acquiring lock: {Name:mka44dcf3eb750950dd6c16a31c74a002b958144 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664164  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I1128 03:43:32.664182  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1128 03:43:32.664201  371290 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 173.167µs
	I1128 03:43:32.664171  371290 cache.go:107] acquiring lock: {Name:mk823adab4ca03e07b7fa403ff7b90b626294365 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664217  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I1128 03:43:32.664206  371290 cache.go:107] acquiring lock: {Name:mk195b1c8677a5ebd114d6c7c594a7814686f0be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664229  371290 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 160.687µs
	I1128 03:43:32.664243  371290 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I1128 03:43:32.664240  371290 start.go:365] acquiring machines lock for stopped-upgrade-268578: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 03:43:32.664179  371290 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 93.941µs
	I1128 03:43:32.664216  371290 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1128 03:43:32.664039  371290 cache.go:107] acquiring lock: {Name:mkadb5525cedc22717277e7b9ceadda84f175666 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664263  371290 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I1128 03:43:32.664181  371290 cache.go:107] acquiring lock: {Name:mkdb51b17cbd1c3903570396d6a2e602de3fbdd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664326  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I1128 03:43:32.664348  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I1128 03:43:32.664364  371290 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 194.961µs
	I1128 03:43:32.664378  371290 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I1128 03:43:32.664264  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I1128 03:43:32.664345  371290 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 317.879µs
	I1128 03:43:32.664403  371290 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 196.198µs
	I1128 03:43:32.664414  371290 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I1128 03:43:32.664416  371290 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I1128 03:43:32.664275  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I1128 03:43:32.664462  371290 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 299.71µs
	I1128 03:43:32.664493  371290 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I1128 03:43:32.664119  371290 cache.go:107] acquiring lock: {Name:mk0a2b61de05fc03214604b3f8a60a8d7ec770ab Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 03:43:32.664550  371290 cache.go:115] /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I1128 03:43:32.664562  371290 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 466.161µs
	I1128 03:43:32.664574  371290 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I1128 03:43:32.664587  371290 cache.go:87] Successfully saved all images to host disk.
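For context: the preload.go:115 line above shows the v1.17.0 CRI-O preload tarball returning 404, so minikube falls back to its per-image cache, and each cache.go:115/cache.go:80 pair records an image that was already on disk and did not need to be downloaded again. A rough Go sketch of that check-then-cache pattern follows; the helper names and the download callback are hypothetical, not minikube's API.

	// cache_sketch.go: illustration of "preload missing, fall back to per-image
	// cache", mirroring the cache layout visible in the log
	// (.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 and friends).
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// cachePathFor maps an image ref like "registry.k8s.io/pause:3.1" to a
	// tarball path under the cache directory.
	func cachePathFor(cacheDir, image string) string {
		name := strings.ReplaceAll(image, ":", "_")
		return filepath.Join(cacheDir, "images", "amd64", name)
	}

	func ensureCached(cacheDir string, images []string, download func(string, string) error) error {
		for _, img := range images {
			dst := cachePathFor(cacheDir, img)
			if _, err := os.Stat(dst); err == nil {
				fmt.Printf("cache hit: %s -> %s\n", img, dst) // matches the "exists ... succeeded" lines
				continue
			}
			if err := download(img, dst); err != nil {
				return fmt.Errorf("caching %s: %w", img, err)
			}
		}
		return nil
	}

	func main() {
		images := []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/kube-proxy:v1.17.0"}
		cacheDir := os.ExpandEnv("$HOME/.minikube/cache")
		if err := ensureCached(cacheDir, images, func(img, dst string) error {
			// Placeholder: a real implementation would pull the image and write dst here.
			return fmt.Errorf("download not implemented in this sketch (%s -> %s)", img, dst)
		}); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}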
	I1128 03:44:00.121996  371290 start.go:369] acquired machines lock for "stopped-upgrade-268578" in 27.45771824s
	I1128 03:44:00.122059  371290 start.go:96] Skipping create...Using existing machine configuration
	I1128 03:44:00.122070  371290 fix.go:54] fixHost starting: minikube
	I1128 03:44:00.122552  371290 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:44:00.122611  371290 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:44:00.141708  371290 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45853
	I1128 03:44:00.142225  371290 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:44:00.142803  371290 main.go:141] libmachine: Using API Version  1
	I1128 03:44:00.142830  371290 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:44:00.143219  371290 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:44:00.143456  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:00.143656  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetState
	I1128 03:44:00.145576  371290 fix.go:102] recreateIfNeeded on stopped-upgrade-268578: state=Stopped err=<nil>
	I1128 03:44:00.145607  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	W1128 03:44:00.145773  371290 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 03:44:00.148376  371290 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-268578" ...
	I1128 03:44:00.150048  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .Start
	I1128 03:44:00.150290  371290 main.go:141] libmachine: (stopped-upgrade-268578) Ensuring networks are active...
	I1128 03:44:00.151288  371290 main.go:141] libmachine: (stopped-upgrade-268578) Ensuring network default is active
	I1128 03:44:00.151938  371290 main.go:141] libmachine: (stopped-upgrade-268578) Ensuring network minikube-net is active
	I1128 03:44:00.152407  371290 main.go:141] libmachine: (stopped-upgrade-268578) Getting domain xml...
	I1128 03:44:00.153254  371290 main.go:141] libmachine: (stopped-upgrade-268578) Creating domain...
	I1128 03:44:01.572114  371290 main.go:141] libmachine: (stopped-upgrade-268578) Waiting to get IP...
	I1128 03:44:01.573184  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:01.573661  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:01.573715  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:01.573638  371580 retry.go:31] will retry after 251.705042ms: waiting for machine to come up
	I1128 03:44:01.827466  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:01.828178  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:01.828208  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:01.828160  371580 retry.go:31] will retry after 375.246344ms: waiting for machine to come up
	I1128 03:44:02.204824  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:02.205660  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:02.205688  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:02.205604  371580 retry.go:31] will retry after 306.214812ms: waiting for machine to come up
	I1128 03:44:02.513198  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:02.513791  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:02.513818  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:02.513742  371580 retry.go:31] will retry after 432.105083ms: waiting for machine to come up
	I1128 03:44:02.947382  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:02.948139  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:02.948176  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:02.948030  371580 retry.go:31] will retry after 655.637687ms: waiting for machine to come up
	I1128 03:44:03.605023  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:03.605519  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:03.605544  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:03.605449  371580 retry.go:31] will retry after 893.411173ms: waiting for machine to come up
	I1128 03:44:04.500338  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:04.500984  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:04.501019  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:04.500936  371580 retry.go:31] will retry after 831.754599ms: waiting for machine to come up
	I1128 03:44:05.334029  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:05.334807  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:05.334835  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:05.334714  371580 retry.go:31] will retry after 1.367193554s: waiting for machine to come up
	I1128 03:44:06.704445  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:06.705284  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:06.705323  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:06.705189  371580 retry.go:31] will retry after 1.248134542s: waiting for machine to come up
	I1128 03:44:07.954662  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:07.955204  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:07.955228  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:07.955150  371580 retry.go:31] will retry after 1.501510823s: waiting for machine to come up
	I1128 03:44:09.458228  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:09.458751  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:09.458787  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:09.458706  371580 retry.go:31] will retry after 2.235077906s: waiting for machine to come up
	I1128 03:44:11.695926  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:11.696460  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:11.696493  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:11.696404  371580 retry.go:31] will retry after 2.886737404s: waiting for machine to come up
	I1128 03:44:14.586278  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:14.586860  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:14.586899  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:14.586818  371580 retry.go:31] will retry after 2.995812625s: waiting for machine to come up
	I1128 03:44:17.583918  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:17.584395  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:17.584423  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:17.584368  371580 retry.go:31] will retry after 4.932742651s: waiting for machine to come up
	I1128 03:44:22.519358  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:22.519821  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:22.519859  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:22.519776  371580 retry.go:31] will retry after 6.963293231s: waiting for machine to come up
	I1128 03:44:29.485979  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:29.486574  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | unable to find current IP address of domain stopped-upgrade-268578 in network minikube-net
	I1128 03:44:29.486601  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | I1128 03:44:29.486510  371580 retry.go:31] will retry after 8.871229095s: waiting for machine to come up
	I1128 03:44:38.359593  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.360057  371290 main.go:141] libmachine: (stopped-upgrade-268578) Found IP for machine: 192.168.50.42
	I1128 03:44:38.360074  371290 main.go:141] libmachine: (stopped-upgrade-268578) Reserving static IP address...
	I1128 03:44:38.360101  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has current primary IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.360561  371290 main.go:141] libmachine: (stopped-upgrade-268578) Reserved static IP address: 192.168.50.42
	I1128 03:44:38.360596  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "stopped-upgrade-268578", mac: "52:54:00:dc:bf:50", ip: "192.168.50.42"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.360613  371290 main.go:141] libmachine: (stopped-upgrade-268578) Waiting for SSH to be available...
	I1128 03:44:38.360653  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-268578", mac: "52:54:00:dc:bf:50", ip: "192.168.50.42"}
	I1128 03:44:38.360665  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | Getting to WaitForSSH function...
	I1128 03:44:38.362952  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.363335  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.363370  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.363525  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | Using SSH client type: external
	I1128 03:44:38.363561  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa (-rw-------)
	I1128 03:44:38.363595  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.42 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 03:44:38.363612  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | About to run SSH command:
	I1128 03:44:38.363641  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | exit 0
	I1128 03:44:38.509765  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | SSH cmd err, output: <nil>: 
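The retry.go:31 lines above show the kvm2 driver polling for a DHCP lease with delays that grow from roughly 250ms to several seconds, then confirming SSH reachability with a plain `exit 0` command before provisioning continues. A small Go sketch of that wait-with-backoff pattern is below; the backoff factors, cap and probe function are assumptions for illustration rather than the driver's actual parameters.

	// waitretry_sketch.go: a grow-and-retry wait loop in the spirit of the
	// "will retry after ..." messages above. Not the driver's real code.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// waitFor retries probe until it succeeds or the deadline passes, growing
	// the delay each attempt and adding jitter.
	func waitFor(probe func() error, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; ; attempt++ {
			if err := probe(); err == nil {
				return nil
			} else if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %d attempts: %w", attempt, err)
			}
			jitter := time.Duration(rand.Int63n(int64(delay) / 2))
			fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay+jitter)
			time.Sleep(delay + jitter)
			if delay < 8*time.Second {
				delay *= 2 // roughly matches the growth seen in the log
			}
		}
	}

	func main() {
		calls := 0
		err := waitFor(func() error {
			calls++
			if calls < 4 {
				return errors.New("machine has no IP yet")
			}
			return nil // in the real flow this would be a DHCP lease lookup or an "ssh exit 0" probe
		}, 2*time.Minute)
		fmt.Println("result:", err)
	}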
	I1128 03:44:38.510203  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetConfigRaw
	I1128 03:44:38.511145  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetIP
	I1128 03:44:38.514636  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.515051  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.515086  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.515427  371290 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/stopped-upgrade-268578/config.json ...
	I1128 03:44:38.515698  371290 machine.go:88] provisioning docker machine ...
	I1128 03:44:38.515727  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:38.515945  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetMachineName
	I1128 03:44:38.516152  371290 buildroot.go:166] provisioning hostname "stopped-upgrade-268578"
	I1128 03:44:38.516175  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetMachineName
	I1128 03:44:38.516395  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:38.519870  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.520314  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.520353  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.520522  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:38.520737  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:38.520928  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:38.521280  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:38.521546  371290 main.go:141] libmachine: Using SSH client type: native
	I1128 03:44:38.521908  371290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I1128 03:44:38.521923  371290 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-268578 && echo "stopped-upgrade-268578" | sudo tee /etc/hostname
	I1128 03:44:38.664826  371290 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-268578
	
	I1128 03:44:38.664870  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:38.668338  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.668773  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.668804  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.669080  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:38.669302  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:38.669519  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:38.669720  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:38.669959  371290 main.go:141] libmachine: Using SSH client type: native
	I1128 03:44:38.670333  371290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I1128 03:44:38.670366  371290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-268578' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-268578/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-268578' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 03:44:38.815530  371290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 03:44:38.815560  371290 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 03:44:38.815582  371290 buildroot.go:174] setting up certificates
	I1128 03:44:38.815594  371290 provision.go:83] configureAuth start
	I1128 03:44:38.815602  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetMachineName
	I1128 03:44:38.815963  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetIP
	I1128 03:44:38.819298  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.819792  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.819836  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.820104  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:38.823056  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.823385  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:38.823419  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:38.823587  371290 provision.go:138] copyHostCerts
	I1128 03:44:38.823657  371290 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 03:44:38.823671  371290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 03:44:38.823782  371290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 03:44:38.823917  371290 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 03:44:38.823934  371290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 03:44:38.823979  371290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 03:44:38.824056  371290 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 03:44:38.824062  371290 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 03:44:38.824093  371290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 03:44:38.824152  371290 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-268578 san=[192.168.50.42 192.168.50.42 localhost 127.0.0.1 minikube stopped-upgrade-268578]
	I1128 03:44:39.012489  371290 provision.go:172] copyRemoteCerts
	I1128 03:44:39.012654  371290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 03:44:39.012716  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:39.016094  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:39.016599  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:39.016664  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:39.016935  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:39.017143  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:39.017312  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:39.017512  371290 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa Username:docker}
	I1128 03:44:39.120295  371290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 03:44:39.138625  371290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1128 03:44:39.156586  371290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1128 03:44:39.174667  371290 provision.go:86] duration metric: configureAuth took 359.060897ms
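The provision.go lines above show configureAuth refreshing the host certificates and then generating a per-machine server certificate whose SANs cover the VM IP, localhost and the machine names, before copying ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A compact, hypothetical Go sketch of issuing such a SAN-bearing certificate follows; it self-signs for brevity, whereas minikube signs with its own CA, and none of this helper code is minikube's.

	// servercert_sketch.go: issue a server certificate carrying the SANs listed
	// in the provision.go line above. Values are copied from the log; the
	// self-signing shortcut is an assumption for illustration only.
	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.stopped-upgrade-268578"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
			KeyUsage:     x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "stopped-upgrade-268578"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.50.42"), net.ParseIP("127.0.0.1")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}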
	I1128 03:44:39.174695  371290 buildroot.go:189] setting minikube options for container-runtime
	I1128 03:44:39.174878  371290 config.go:182] Loaded profile config "stopped-upgrade-268578": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 03:44:39.174985  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:39.179157  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:39.179632  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:39.179650  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:39.179656  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:39.179801  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:39.180113  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:39.180276  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:39.180449  371290 main.go:141] libmachine: Using SSH client type: native
	I1128 03:44:39.180934  371290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I1128 03:44:39.180965  371290 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 03:44:41.581696  371290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 03:44:41.581730  371290 machine.go:91] provisioned docker machine in 3.066014157s
	I1128 03:44:41.581744  371290 start.go:300] post-start starting for "stopped-upgrade-268578" (driver="kvm2")
	I1128 03:44:41.581759  371290 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 03:44:41.581784  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:41.582170  371290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 03:44:41.582212  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:41.585999  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.586374  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:41.586417  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.586581  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:41.586786  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:41.586960  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:41.587144  371290 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa Username:docker}
	I1128 03:44:41.677474  371290 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 03:44:41.682277  371290 info.go:137] Remote host: Buildroot 2019.02.7
	I1128 03:44:41.682318  371290 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 03:44:41.682406  371290 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 03:44:41.682477  371290 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 03:44:41.682555  371290 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 03:44:41.688896  371290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 03:44:41.706429  371290 start.go:303] post-start completed in 124.668327ms
	I1128 03:44:41.706457  371290 fix.go:56] fixHost completed within 41.584387293s
	I1128 03:44:41.706485  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:41.709975  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.710364  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:41.710397  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.710552  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:41.710769  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:41.710921  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:41.711056  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:41.711235  371290 main.go:141] libmachine: Using SSH client type: native
	I1128 03:44:41.711548  371290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.50.42 22 <nil> <nil>}
	I1128 03:44:41.711560  371290 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 03:44:41.855150  371290 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701143081.781580957
	
	I1128 03:44:41.855175  371290 fix.go:206] guest clock: 1701143081.781580957
	I1128 03:44:41.855185  371290 fix.go:219] Guest: 2023-11-28 03:44:41.781580957 +0000 UTC Remote: 2023-11-28 03:44:41.706462455 +0000 UTC m=+69.249651661 (delta=75.118502ms)
	I1128 03:44:41.855332  371290 fix.go:190] guest clock delta is within tolerance: 75.118502ms
	I1128 03:44:41.855341  371290 start.go:83] releasing machines lock for "stopped-upgrade-268578", held for 41.733310596s
	I1128 03:44:41.855370  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:41.855621  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetIP
	I1128 03:44:41.858976  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.859448  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:41.859482  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.859675  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:41.861110  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:41.861347  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .DriverName
	I1128 03:44:41.861446  371290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 03:44:41.861482  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:41.861943  371290 ssh_runner.go:195] Run: cat /version.json
	I1128 03:44:41.861968  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHHostname
	I1128 03:44:41.865833  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.866601  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:41.866923  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.867152  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:41.867447  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:41.867612  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:41.867777  371290 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa Username:docker}
	I1128 03:44:41.868446  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.872628  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:dc:bf:50", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-11-28 04:44:28 +0000 UTC Type:0 Mac:52:54:00:dc:bf:50 Iaid: IPaddr:192.168.50.42 Prefix:24 Hostname:stopped-upgrade-268578 Clientid:01:52:54:00:dc:bf:50}
	I1128 03:44:41.872661  371290 main.go:141] libmachine: (stopped-upgrade-268578) DBG | domain stopped-upgrade-268578 has defined IP address 192.168.50.42 and MAC address 52:54:00:dc:bf:50 in network minikube-net
	I1128 03:44:41.872908  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHPort
	I1128 03:44:41.873106  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHKeyPath
	I1128 03:44:41.873273  371290 main.go:141] libmachine: (stopped-upgrade-268578) Calling .GetSSHUsername
	I1128 03:44:41.873491  371290 sshutil.go:53] new ssh client: &{IP:192.168.50.42 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/stopped-upgrade-268578/id_rsa Username:docker}
	W1128 03:44:42.003753  371290 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I1128 03:44:42.003844  371290 ssh_runner.go:195] Run: systemctl --version
	I1128 03:44:42.011120  371290 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 03:44:42.317138  371290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 03:44:42.326020  371290 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 03:44:42.326113  371290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 03:44:42.336439  371290 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1128 03:44:42.336466  371290 start.go:472] detecting cgroup driver to use...
	I1128 03:44:42.336531  371290 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 03:44:42.348103  371290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 03:44:42.357305  371290 docker.go:203] disabling cri-docker service (if available) ...
	I1128 03:44:42.357376  371290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 03:44:42.366110  371290 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 03:44:42.380584  371290 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W1128 03:44:42.394341  371290 docker.go:213] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I1128 03:44:42.394445  371290 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 03:44:42.511059  371290 docker.go:219] disabling docker service ...
	I1128 03:44:42.511853  371290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 03:44:42.526276  371290 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 03:44:42.546253  371290 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 03:44:42.669731  371290 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 03:44:42.773336  371290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 03:44:42.783064  371290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 03:44:42.796275  371290 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I1128 03:44:42.796350  371290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 03:44:42.805857  371290 out.go:177] 
	W1128 03:44:42.809260  371290 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W1128 03:44:42.809292  371290 out.go:239] * 
	* 
	W1128 03:44:42.810338  371290 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:44:42.811965  371290 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-268578 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (290.19s)
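Analysis of the failure above: the run exits with RUNTIME_ENABLE while setting the cri-o pause image, because the sed targets /etc/crio/crio.conf.d/02-crio.conf and that drop-in file does not exist on the Buildroot 2019.02.7 guest booted from the old v1.6.2 ISO (see "Remote host: Buildroot 2019.02.7" earlier in this log). The Go program below is an illustrative sketch only, not minikube's actual crio.go code: a hypothetical updatePauseImage helper that probes for the drop-in first and falls back to /etc/crio/crio.conf before rewriting pause_image.

	package main

	import "fmt"

	// updatePauseImage is a hypothetical sketch: pick whichever cri-o config file
	// actually exists on the guest before rewriting pause_image, instead of
	// assuming the 02-crio.conf drop-in is present.
	func updatePauseImage(run func(cmd string) error, pauseImage string) error {
		candidates := []string{
			"/etc/crio/crio.conf.d/02-crio.conf", // path used by newer minikube ISOs
			"/etc/crio/crio.conf",                // fallback for older Buildroot guests
		}
		for _, cfg := range candidates {
			// "test -f" run on the guest succeeds only if the config file exists.
			if err := run(fmt.Sprintf("sudo test -f %s", cfg)); err != nil {
				continue
			}
			cmd := fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, cfg)
			return run(cmd)
		}
		return fmt.Errorf("no cri-o config found to set pause_image %q", pauseImage)
	}

	func main() {
		// Dry run: print the commands instead of executing them over SSH.
		dryRun := func(cmd string) error {
			fmt.Println("would run:", cmd)
			return nil
		}
		if err := updatePauseImage(dryRun, "registry.k8s.io/pause:3.1"); err != nil {
			fmt.Println("error:", err)
		}
	}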

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-666657 --alsologtostderr -v=3
E1128 03:49:18.807028  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:18.812366  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:18.822624  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:18.842958  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:18.883307  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:18.964062  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:19.025416  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.030732  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.041028  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.061326  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.101672  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.124890  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:19.182163  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.342631  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:19.446070  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:19.663477  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:20.086581  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:20.304132  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:21.367343  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:21.585301  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:23.928031  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:24.145528  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:49:29.048478  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:29.265878  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-666657 --alsologtostderr -v=3: exit status 82 (2m1.827357827s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-666657"  ...
	* Stopping node "old-k8s-version-666657"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:49:15.885712  383866 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:49:15.885917  383866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:15.885928  383866 out.go:309] Setting ErrFile to fd 2...
	I1128 03:49:15.885936  383866 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:15.886222  383866 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:49:15.886554  383866 out.go:303] Setting JSON to false
	I1128 03:49:15.886663  383866 mustload.go:65] Loading cluster: old-k8s-version-666657
	I1128 03:49:15.887159  383866 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 03:49:15.887270  383866 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/old-k8s-version-666657/config.json ...
	I1128 03:49:15.887468  383866 mustload.go:65] Loading cluster: old-k8s-version-666657
	I1128 03:49:15.887623  383866 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 03:49:15.887662  383866 stop.go:39] StopHost: old-k8s-version-666657
	I1128 03:49:15.888265  383866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:49:15.888351  383866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:49:15.904560  383866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38949
	I1128 03:49:15.905144  383866 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:49:15.905801  383866 main.go:141] libmachine: Using API Version  1
	I1128 03:49:15.905826  383866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:49:15.906299  383866 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:49:15.908768  383866 out.go:177] * Stopping node "old-k8s-version-666657"  ...
	I1128 03:49:15.910176  383866 main.go:141] libmachine: Stopping "old-k8s-version-666657"...
	I1128 03:49:15.910213  383866 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 03:49:15.912142  383866 main.go:141] libmachine: (old-k8s-version-666657) Calling .Stop
	I1128 03:49:15.915734  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 0/60
	I1128 03:49:16.917592  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 1/60
	I1128 03:49:17.918892  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 2/60
	I1128 03:49:18.920393  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 3/60
	I1128 03:49:19.922648  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 4/60
	I1128 03:49:20.924838  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 5/60
	I1128 03:49:21.926496  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 6/60
	I1128 03:49:22.928271  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 7/60
	I1128 03:49:23.930569  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 8/60
	I1128 03:49:24.932040  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 9/60
	I1128 03:49:25.934159  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 10/60
	I1128 03:49:26.935759  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 11/60
	I1128 03:49:27.937436  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 12/60
	I1128 03:49:28.939847  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 13/60
	I1128 03:49:29.941762  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 14/60
	I1128 03:49:30.943422  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 15/60
	I1128 03:49:31.945400  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 16/60
	I1128 03:49:32.947565  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 17/60
	I1128 03:49:33.949015  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 18/60
	I1128 03:49:34.950423  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 19/60
	I1128 03:49:35.952600  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 20/60
	I1128 03:49:36.954566  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 21/60
	I1128 03:49:37.956050  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 22/60
	I1128 03:49:38.957848  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 23/60
	I1128 03:49:39.959477  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 24/60
	I1128 03:49:40.961675  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 25/60
	I1128 03:49:41.963095  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 26/60
	I1128 03:49:42.964811  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 27/60
	I1128 03:49:43.966336  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 28/60
	I1128 03:49:44.967823  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 29/60
	I1128 03:49:45.970219  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 30/60
	I1128 03:49:46.971717  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 31/60
	I1128 03:49:47.973342  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 32/60
	I1128 03:49:48.974848  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 33/60
	I1128 03:49:49.976795  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 34/60
	I1128 03:49:50.978881  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 35/60
	I1128 03:49:51.981008  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 36/60
	I1128 03:49:52.982588  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 37/60
	I1128 03:49:53.984117  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 38/60
	I1128 03:49:54.985351  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 39/60
	I1128 03:49:55.987443  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 40/60
	I1128 03:49:56.988903  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 41/60
	I1128 03:49:57.990150  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 42/60
	I1128 03:49:58.991884  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 43/60
	I1128 03:49:59.993275  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 44/60
	I1128 03:50:00.995321  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 45/60
	I1128 03:50:01.996723  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 46/60
	I1128 03:50:02.998108  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 47/60
	I1128 03:50:03.999470  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 48/60
	I1128 03:50:05.001060  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 49/60
	I1128 03:50:06.003530  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 50/60
	I1128 03:50:07.004979  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 51/60
	I1128 03:50:08.006524  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 52/60
	I1128 03:50:09.007729  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 53/60
	I1128 03:50:10.009084  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 54/60
	I1128 03:50:11.010913  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 55/60
	I1128 03:50:12.012548  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 56/60
	I1128 03:50:13.013873  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 57/60
	I1128 03:50:14.015335  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 58/60
	I1128 03:50:15.016757  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 59/60
	I1128 03:50:16.018076  383866 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:50:16.018182  383866 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:16.018204  383866 retry.go:31] will retry after 1.487010775s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:17.505846  383866 stop.go:39] StopHost: old-k8s-version-666657
	I1128 03:50:17.506462  383866 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:50:17.506527  383866 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:50:17.521357  383866 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42857
	I1128 03:50:17.521884  383866 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:50:17.522419  383866 main.go:141] libmachine: Using API Version  1
	I1128 03:50:17.522446  383866 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:50:17.522784  383866 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:50:17.525033  383866 out.go:177] * Stopping node "old-k8s-version-666657"  ...
	I1128 03:50:17.526491  383866 main.go:141] libmachine: Stopping "old-k8s-version-666657"...
	I1128 03:50:17.526511  383866 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 03:50:17.528285  383866 main.go:141] libmachine: (old-k8s-version-666657) Calling .Stop
	I1128 03:50:17.531810  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 0/60
	I1128 03:50:18.533396  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 1/60
	I1128 03:50:19.535572  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 2/60
	I1128 03:50:20.537124  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 3/60
	I1128 03:50:21.538669  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 4/60
	I1128 03:50:22.540591  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 5/60
	I1128 03:50:23.542158  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 6/60
	I1128 03:50:24.543515  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 7/60
	I1128 03:50:25.544976  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 8/60
	I1128 03:50:26.546374  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 9/60
	I1128 03:50:27.548382  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 10/60
	I1128 03:50:28.549770  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 11/60
	I1128 03:50:29.551032  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 12/60
	I1128 03:50:30.552463  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 13/60
	I1128 03:50:31.553931  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 14/60
	I1128 03:50:32.556012  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 15/60
	I1128 03:50:33.557903  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 16/60
	I1128 03:50:34.559256  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 17/60
	I1128 03:50:35.560845  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 18/60
	I1128 03:50:36.562474  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 19/60
	I1128 03:50:37.564653  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 20/60
	I1128 03:50:38.566063  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 21/60
	I1128 03:50:39.567571  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 22/60
	I1128 03:50:40.568920  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 23/60
	I1128 03:50:41.570126  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 24/60
	I1128 03:50:42.572088  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 25/60
	I1128 03:50:43.573465  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 26/60
	I1128 03:50:44.575613  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 27/60
	I1128 03:50:45.577341  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 28/60
	I1128 03:50:46.579055  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 29/60
	I1128 03:50:47.580668  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 30/60
	I1128 03:50:48.581910  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 31/60
	I1128 03:50:49.583229  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 32/60
	I1128 03:50:50.584645  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 33/60
	I1128 03:50:51.585714  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 34/60
	I1128 03:50:52.587307  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 35/60
	I1128 03:50:53.588596  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 36/60
	I1128 03:50:54.589942  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 37/60
	I1128 03:50:55.591559  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 38/60
	I1128 03:50:56.592948  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 39/60
	I1128 03:50:57.595124  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 40/60
	I1128 03:50:58.596482  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 41/60
	I1128 03:50:59.597820  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 42/60
	I1128 03:51:00.599330  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 43/60
	I1128 03:51:01.601082  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 44/60
	I1128 03:51:02.602649  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 45/60
	I1128 03:51:03.603978  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 46/60
	I1128 03:51:04.605254  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 47/60
	I1128 03:51:05.606418  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 48/60
	I1128 03:51:06.608041  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 49/60
	I1128 03:51:07.609921  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 50/60
	I1128 03:51:08.611395  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 51/60
	I1128 03:51:09.612915  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 52/60
	I1128 03:51:10.614250  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 53/60
	I1128 03:51:11.615518  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 54/60
	I1128 03:51:12.617443  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 55/60
	I1128 03:51:13.618782  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 56/60
	I1128 03:51:14.620124  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 57/60
	I1128 03:51:15.621423  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 58/60
	I1128 03:51:16.623300  383866 main.go:141] libmachine: (old-k8s-version-666657) Waiting for machine to stop 59/60
	I1128 03:51:17.624029  383866 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:51:17.624093  383866 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:51:17.626369  383866 out.go:177] 
	W1128 03:51:17.628124  383866 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 03:51:17.628153  383866 out.go:239] * 
	* 
	W1128 03:51:17.632376  383866 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:51:17.634059  383866 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-666657 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
E1128 03:51:17.838883  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:17.844241  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:17.854573  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:17.874870  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:17.915211  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:17.995655  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:18.156104  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:18.477247  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:19.117857  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:20.399025  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:22.959698  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:23.483976  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:51:28.080760  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:32.181761  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657: exit status 3 (18.584143194s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:51:36.221262  384556 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E1128 03:51:36.221291  384556 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-666657" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.41s)
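Analysis of the failure above: the stop exits with GUEST_STOP_TIMEOUT after the two 60-tick wait loops visible in the log (plus one short retry), because the KVM guest never leaves the "Running" state. The Go program below is a minimal, illustrative sketch of that wait-for-stop polling pattern; the names (waitForStop, state, ticks) are hypothetical and do not reflect minikube's or libmachine's actual API.

	package main

	import (
		"fmt"
		"log"
		"time"
	)

	// waitForStop polls the machine state once per second for up to ticks
	// iterations and reports a timeout if it never leaves "Running".
	func waitForStop(state func() (string, error), ticks int) error {
		for i := 0; i < ticks; i++ {
			s, err := state()
			if err != nil {
				return err
			}
			if s != "Running" {
				return nil // machine reached Stopped (or another terminal state)
			}
			log.Printf("Waiting for machine to stop %d/%d", i, ticks)
			time.Sleep(time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", "Running")
	}

	func main() {
		// Simulate a guest that never leaves "Running", as in the log above.
		alwaysRunning := func() (string, error) { return "Running", nil }
		if err := waitForStop(alwaysRunning, 3); err != nil {
			fmt.Println("stop err:", err)
		}
	}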

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (139.54s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-644411 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p newest-cni-644411 --alsologtostderr -v=3: exit status 82 (2m0.9632995s)

                                                
                                                
-- stdout --
	* Stopping node "newest-cni-644411"  ...
	* Stopping node "newest-cni-644411"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:49:31.604295  383982 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:49:31.604668  383982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:31.604685  383982 out.go:309] Setting ErrFile to fd 2...
	I1128 03:49:31.604694  383982 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:31.604954  383982 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:49:31.605269  383982 out.go:303] Setting JSON to false
	I1128 03:49:31.605381  383982 mustload.go:65] Loading cluster: newest-cni-644411
	I1128 03:49:31.605761  383982 config.go:182] Loaded profile config "newest-cni-644411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 03:49:31.605838  383982 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/newest-cni-644411/config.json ...
	I1128 03:49:31.606418  383982 mustload.go:65] Loading cluster: newest-cni-644411
	I1128 03:49:31.606541  383982 config.go:182] Loaded profile config "newest-cni-644411": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 03:49:31.606586  383982 stop.go:39] StopHost: newest-cni-644411
	I1128 03:49:31.607116  383982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:49:31.607189  383982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:49:31.623761  383982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40673
	I1128 03:49:31.624314  383982 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:49:31.625083  383982 main.go:141] libmachine: Using API Version  1
	I1128 03:49:31.625116  383982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:49:31.625474  383982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:49:31.627818  383982 out.go:177] * Stopping node "newest-cni-644411"  ...
	I1128 03:49:31.629881  383982 main.go:141] libmachine: Stopping "newest-cni-644411"...
	I1128 03:49:31.629922  383982 main.go:141] libmachine: (newest-cni-644411) Calling .GetState
	I1128 03:49:31.631861  383982 main.go:141] libmachine: (newest-cni-644411) Calling .Stop
	I1128 03:49:31.636226  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 0/60
	I1128 03:49:32.638110  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 1/60
	I1128 03:49:33.639550  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 2/60
	I1128 03:49:34.640845  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 3/60
	I1128 03:49:35.643183  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 4/60
	I1128 03:49:36.645201  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 5/60
	I1128 03:49:37.647748  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 6/60
	I1128 03:49:38.649176  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 7/60
	I1128 03:49:39.650962  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 8/60
	I1128 03:49:40.652219  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 9/60
	I1128 03:49:41.654706  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 10/60
	I1128 03:49:42.656293  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 11/60
	I1128 03:49:43.657946  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 12/60
	I1128 03:49:44.660080  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 13/60
	I1128 03:49:45.661800  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 14/60
	I1128 03:49:46.664096  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 15/60
	I1128 03:49:47.665604  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 16/60
	I1128 03:49:48.667269  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 17/60
	I1128 03:49:49.668831  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 18/60
	I1128 03:49:50.670360  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 19/60
	I1128 03:49:51.672362  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 20/60
	I1128 03:49:52.674000  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 21/60
	I1128 03:49:53.675443  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 22/60
	I1128 03:49:54.677305  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 23/60
	I1128 03:49:55.678975  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 24/60
	I1128 03:49:56.680705  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 25/60
	I1128 03:49:57.682107  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 26/60
	I1128 03:49:58.683778  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 27/60
	I1128 03:49:59.685325  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 28/60
	I1128 03:50:00.686662  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 29/60
	I1128 03:50:01.689052  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 30/60
	I1128 03:50:02.691523  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 31/60
	I1128 03:50:03.693147  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 32/60
	I1128 03:50:04.695564  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 33/60
	I1128 03:50:05.697175  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 34/60
	I1128 03:50:06.699356  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 35/60
	I1128 03:50:07.700724  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 36/60
	I1128 03:50:08.702349  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 37/60
	I1128 03:50:09.703722  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 38/60
	I1128 03:50:10.705107  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 39/60
	I1128 03:50:11.707677  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 40/60
	I1128 03:50:12.709338  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 41/60
	I1128 03:50:13.710942  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 42/60
	I1128 03:50:14.712628  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 43/60
	I1128 03:50:15.714255  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 44/60
	I1128 03:50:16.716443  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 45/60
	I1128 03:50:17.717758  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 46/60
	I1128 03:50:18.719224  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 47/60
	I1128 03:50:19.720660  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 48/60
	I1128 03:50:20.722063  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 49/60
	I1128 03:50:21.724636  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 50/60
	I1128 03:50:22.726138  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 51/60
	I1128 03:50:23.727672  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 52/60
	I1128 03:50:24.729182  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 53/60
	I1128 03:50:25.730833  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 54/60
	I1128 03:50:26.732604  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 55/60
	I1128 03:50:27.734059  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 56/60
	I1128 03:50:28.735335  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 57/60
	I1128 03:50:29.737035  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 58/60
	I1128 03:50:30.738560  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 59/60
	I1128 03:50:31.739666  383982 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:50:31.739732  383982 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:31.739753  383982 retry.go:31] will retry after 626.644088ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:32.366555  383982 stop.go:39] StopHost: newest-cni-644411
	I1128 03:50:32.367009  383982 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:50:32.367058  383982 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:50:32.381881  383982 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42385
	I1128 03:50:32.382426  383982 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:50:32.382969  383982 main.go:141] libmachine: Using API Version  1
	I1128 03:50:32.383008  383982 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:50:32.383404  383982 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:50:32.385410  383982 out.go:177] * Stopping node "newest-cni-644411"  ...
	I1128 03:50:32.386811  383982 main.go:141] libmachine: Stopping "newest-cni-644411"...
	I1128 03:50:32.386826  383982 main.go:141] libmachine: (newest-cni-644411) Calling .GetState
	I1128 03:50:32.388653  383982 main.go:141] libmachine: (newest-cni-644411) Calling .Stop
	I1128 03:50:32.392408  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 0/60
	I1128 03:50:33.394293  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 1/60
	I1128 03:50:34.395785  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 2/60
	I1128 03:50:35.397297  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 3/60
	I1128 03:50:36.398756  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 4/60
	I1128 03:50:37.400361  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 5/60
	I1128 03:50:38.402015  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 6/60
	I1128 03:50:39.403533  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 7/60
	I1128 03:50:40.405166  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 8/60
	I1128 03:50:41.406546  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 9/60
	I1128 03:50:42.408491  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 10/60
	I1128 03:50:43.409867  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 11/60
	I1128 03:50:44.411427  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 12/60
	I1128 03:50:45.412861  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 13/60
	I1128 03:50:46.414363  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 14/60
	I1128 03:50:47.416090  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 15/60
	I1128 03:50:48.417661  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 16/60
	I1128 03:50:49.419120  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 17/60
	I1128 03:50:50.420594  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 18/60
	I1128 03:50:51.422093  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 19/60
	I1128 03:50:52.423872  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 20/60
	I1128 03:50:53.425407  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 21/60
	I1128 03:50:54.426873  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 22/60
	I1128 03:50:55.428458  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 23/60
	I1128 03:50:56.429870  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 24/60
	I1128 03:50:57.431228  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 25/60
	I1128 03:50:58.432930  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 26/60
	I1128 03:50:59.434205  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 27/60
	I1128 03:51:00.435895  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 28/60
	I1128 03:51:01.437212  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 29/60
	I1128 03:51:02.439054  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 30/60
	I1128 03:51:03.440431  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 31/60
	I1128 03:51:04.442000  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 32/60
	I1128 03:51:05.443401  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 33/60
	I1128 03:51:06.444814  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 34/60
	I1128 03:51:07.446588  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 35/60
	I1128 03:51:08.448090  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 36/60
	I1128 03:51:09.449877  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 37/60
	I1128 03:51:10.451499  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 38/60
	I1128 03:51:11.452978  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 39/60
	I1128 03:51:12.454670  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 40/60
	I1128 03:51:13.456062  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 41/60
	I1128 03:51:14.457570  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 42/60
	I1128 03:51:15.458946  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 43/60
	I1128 03:51:16.460676  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 44/60
	I1128 03:51:17.462288  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 45/60
	I1128 03:51:18.464969  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 46/60
	I1128 03:51:19.466544  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 47/60
	I1128 03:51:20.468262  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 48/60
	I1128 03:51:21.469982  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 49/60
	I1128 03:51:22.471830  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 50/60
	I1128 03:51:23.473597  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 51/60
	I1128 03:51:24.475115  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 52/60
	I1128 03:51:25.476413  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 53/60
	I1128 03:51:26.477963  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 54/60
	I1128 03:51:27.479560  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 55/60
	I1128 03:51:28.481155  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 56/60
	I1128 03:51:29.482419  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 57/60
	I1128 03:51:30.484051  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 58/60
	I1128 03:51:31.485233  383982 main.go:141] libmachine: (newest-cni-644411) Waiting for machine to stop 59/60
	I1128 03:51:32.486199  383982 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:51:32.486254  383982 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:51:32.488179  383982 out.go:177] 
	W1128 03:51:32.489827  383982 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 03:51:32.489853  383982 out.go:239] * 
	* 
	W1128 03:51:32.492758  383982 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:51:32.494328  383982 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p newest-cni-644411 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411: exit status 3 (18.573565439s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:51:51.069312  384623 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E1128 03:51:51.069334  384623 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-644411" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/Stop (139.54s)
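
The log above shows the stop flow that times out: libmachine issues .Stop, polls the domain state once per second for 60 attempts ("Waiting for machine to stop N/60"), retries once after a sub-second backoff, and finally exits with GUEST_STOP_TIMEOUT (exit status 82) because the VM never leaves the "Running" state. The following Go snippet is only an illustrative sketch of that poll-and-retry pattern, not minikube's actual stop.go; stopOnce, requestStop, and isRunning are hypothetical stand-ins, and maxWait is shortened to 3 for brevity where the real log uses 60.

```go
// Illustrative sketch of the stop/poll/retry pattern visible in the log above.
// Not minikube's implementation; all names here are placeholders.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errStillRunning = errors.New(`unable to stop vm, current state "Running"`)

// stopOnce asks the driver to stop the VM, then polls up to maxWait times
// (once per second), mirroring the "Waiting for machine to stop N/60" lines.
func stopOnce(requestStop func() error, isRunning func() bool, maxWait int) error {
	if err := requestStop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		if !isRunning() {
			return nil // machine reached a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	return errStillRunning
}

func main() {
	// Simulate a VM that never stops, as in the failed test above.
	requestStop := func() error { return nil }
	isRunning := func() bool { return true }

	// First attempt, one retry after a short backoff, then give up,
	// roughly the sequence that ends in GUEST_STOP_TIMEOUT here.
	if err := stopOnce(requestStop, isRunning, 3); err != nil {
		time.Sleep(500 * time.Millisecond) // backoff before the retry
		if err := stopOnce(requestStop, isRunning, 3); err != nil {
			fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		}
	}
}
```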

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.52s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-222348 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-222348 --alsologtostderr -v=3: exit status 82 (2m1.000597838s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-222348"  ...
	* Stopping node "no-preload-222348"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:49:46.721497  384139 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:49:46.721763  384139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:46.721772  384139 out.go:309] Setting ErrFile to fd 2...
	I1128 03:49:46.721777  384139 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:46.721982  384139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:49:46.722227  384139 out.go:303] Setting JSON to false
	I1128 03:49:46.722326  384139 mustload.go:65] Loading cluster: no-preload-222348
	I1128 03:49:46.722643  384139 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 03:49:46.722711  384139 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/no-preload-222348/config.json ...
	I1128 03:49:46.722870  384139 mustload.go:65] Loading cluster: no-preload-222348
	I1128 03:49:46.722985  384139 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 03:49:46.723028  384139 stop.go:39] StopHost: no-preload-222348
	I1128 03:49:46.723387  384139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:49:46.723435  384139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:49:46.741729  384139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43737
	I1128 03:49:46.742415  384139 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:49:46.743002  384139 main.go:141] libmachine: Using API Version  1
	I1128 03:49:46.743021  384139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:49:46.743375  384139 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:49:46.745557  384139 out.go:177] * Stopping node "no-preload-222348"  ...
	I1128 03:49:46.747462  384139 main.go:141] libmachine: Stopping "no-preload-222348"...
	I1128 03:49:46.747484  384139 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 03:49:46.749146  384139 main.go:141] libmachine: (no-preload-222348) Calling .Stop
	I1128 03:49:46.752358  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 0/60
	I1128 03:49:47.754352  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 1/60
	I1128 03:49:48.755785  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 2/60
	I1128 03:49:49.757093  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 3/60
	I1128 03:49:50.759492  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 4/60
	I1128 03:49:51.761538  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 5/60
	I1128 03:49:52.762877  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 6/60
	I1128 03:49:53.765016  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 7/60
	I1128 03:49:54.766643  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 8/60
	I1128 03:49:55.768088  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 9/60
	I1128 03:49:56.770254  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 10/60
	I1128 03:49:57.771482  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 11/60
	I1128 03:49:58.772816  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 12/60
	I1128 03:49:59.774178  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 13/60
	I1128 03:50:00.775582  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 14/60
	I1128 03:50:01.777681  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 15/60
	I1128 03:50:02.779120  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 16/60
	I1128 03:50:03.780724  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 17/60
	I1128 03:50:04.781989  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 18/60
	I1128 03:50:05.783740  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 19/60
	I1128 03:50:06.785723  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 20/60
	I1128 03:50:07.787054  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 21/60
	I1128 03:50:08.788448  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 22/60
	I1128 03:50:09.789691  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 23/60
	I1128 03:50:10.791112  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 24/60
	I1128 03:50:11.793177  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 25/60
	I1128 03:50:12.794553  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 26/60
	I1128 03:50:13.795943  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 27/60
	I1128 03:50:14.797488  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 28/60
	I1128 03:50:15.798894  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 29/60
	I1128 03:50:16.800979  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 30/60
	I1128 03:50:17.802424  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 31/60
	I1128 03:50:18.803792  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 32/60
	I1128 03:50:19.805324  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 33/60
	I1128 03:50:20.806679  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 34/60
	I1128 03:50:21.808702  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 35/60
	I1128 03:50:22.810229  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 36/60
	I1128 03:50:23.811559  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 37/60
	I1128 03:50:24.813054  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 38/60
	I1128 03:50:25.814370  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 39/60
	I1128 03:50:26.816539  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 40/60
	I1128 03:50:27.817849  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 41/60
	I1128 03:50:28.819170  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 42/60
	I1128 03:50:29.820723  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 43/60
	I1128 03:50:30.822192  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 44/60
	I1128 03:50:31.823885  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 45/60
	I1128 03:50:32.825422  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 46/60
	I1128 03:50:33.827363  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 47/60
	I1128 03:50:34.828738  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 48/60
	I1128 03:50:35.830210  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 49/60
	I1128 03:50:36.832668  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 50/60
	I1128 03:50:37.834077  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 51/60
	I1128 03:50:38.835609  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 52/60
	I1128 03:50:39.836974  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 53/60
	I1128 03:50:40.838635  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 54/60
	I1128 03:50:41.840676  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 55/60
	I1128 03:50:42.841952  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 56/60
	I1128 03:50:43.843557  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 57/60
	I1128 03:50:44.844939  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 58/60
	I1128 03:50:45.846417  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 59/60
	I1128 03:50:46.847250  384139 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:50:46.847315  384139 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:46.847341  384139 retry.go:31] will retry after 679.500347ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:47.527202  384139 stop.go:39] StopHost: no-preload-222348
	I1128 03:50:47.527699  384139 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:50:47.527766  384139 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:50:47.542492  384139 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45205
	I1128 03:50:47.542961  384139 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:50:47.543422  384139 main.go:141] libmachine: Using API Version  1
	I1128 03:50:47.543452  384139 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:50:47.543813  384139 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:50:47.546064  384139 out.go:177] * Stopping node "no-preload-222348"  ...
	I1128 03:50:47.547741  384139 main.go:141] libmachine: Stopping "no-preload-222348"...
	I1128 03:50:47.547762  384139 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 03:50:47.549353  384139 main.go:141] libmachine: (no-preload-222348) Calling .Stop
	I1128 03:50:47.552909  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 0/60
	I1128 03:50:48.554237  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 1/60
	I1128 03:50:49.555744  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 2/60
	I1128 03:50:50.557398  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 3/60
	I1128 03:50:51.558762  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 4/60
	I1128 03:50:52.560328  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 5/60
	I1128 03:50:53.561770  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 6/60
	I1128 03:50:54.563122  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 7/60
	I1128 03:50:55.564733  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 8/60
	I1128 03:50:56.566263  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 9/60
	I1128 03:50:57.568204  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 10/60
	I1128 03:50:58.569563  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 11/60
	I1128 03:50:59.571093  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 12/60
	I1128 03:51:00.572707  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 13/60
	I1128 03:51:01.574377  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 14/60
	I1128 03:51:02.575763  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 15/60
	I1128 03:51:03.577257  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 16/60
	I1128 03:51:04.578777  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 17/60
	I1128 03:51:05.580366  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 18/60
	I1128 03:51:06.581808  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 19/60
	I1128 03:51:07.583550  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 20/60
	I1128 03:51:08.585080  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 21/60
	I1128 03:51:09.586441  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 22/60
	I1128 03:51:10.587837  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 23/60
	I1128 03:51:11.589099  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 24/60
	I1128 03:51:12.590750  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 25/60
	I1128 03:51:13.592317  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 26/60
	I1128 03:51:14.593925  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 27/60
	I1128 03:51:15.595295  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 28/60
	I1128 03:51:16.597042  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 29/60
	I1128 03:51:17.599143  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 30/60
	I1128 03:51:18.600441  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 31/60
	I1128 03:51:19.602007  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 32/60
	I1128 03:51:20.603333  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 33/60
	I1128 03:51:21.605037  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 34/60
	I1128 03:51:22.606929  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 35/60
	I1128 03:51:23.608429  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 36/60
	I1128 03:51:24.609973  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 37/60
	I1128 03:51:25.611302  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 38/60
	I1128 03:51:26.612805  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 39/60
	I1128 03:51:27.614694  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 40/60
	I1128 03:51:28.615992  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 41/60
	I1128 03:51:29.617749  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 42/60
	I1128 03:51:30.619254  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 43/60
	I1128 03:51:31.620923  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 44/60
	I1128 03:51:32.622132  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 45/60
	I1128 03:51:33.623642  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 46/60
	I1128 03:51:34.625164  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 47/60
	I1128 03:51:35.627408  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 48/60
	I1128 03:51:36.629272  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 49/60
	I1128 03:51:37.631127  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 50/60
	I1128 03:51:38.632547  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 51/60
	I1128 03:51:39.633967  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 52/60
	I1128 03:51:40.635288  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 53/60
	I1128 03:51:41.636771  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 54/60
	I1128 03:51:42.638417  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 55/60
	I1128 03:51:43.639843  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 56/60
	I1128 03:51:44.641356  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 57/60
	I1128 03:51:45.643188  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 58/60
	I1128 03:51:46.644720  384139 main.go:141] libmachine: (no-preload-222348) Waiting for machine to stop 59/60
	I1128 03:51:47.645951  384139 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:51:47.646010  384139 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:51:47.648134  384139 out.go:177] 
	W1128 03:51:47.649489  384139 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 03:51:47.649505  384139 out.go:239] * 
	* 
	W1128 03:51:47.652566  384139 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:51:47.653970  384139 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-222348 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348: exit status 3 (18.517840719s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:06.173235  384763 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1128 03:52:06.173259  384763 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-222348" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.52s)
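
The same GUEST_STOP_TIMEOUT pattern recurs for no-preload-222348, and the post-mortem status check then fails with "dial tcp 192.168.39.106:22: connect: no route to host". As a hedged sketch (not minikube's status code; the address is just the one reported in the error above), a plain TCP probe of the SSH port reproduces that class of failure when the VM is wedged:

```go
// Minimal reachability probe resembling the post-mortem status check above.
// The address is taken from the logged error; this is not minikube's API.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.106:22" // SSH endpoint reported in the status error
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// With the host unreachable this prints errors of the form
		// "dial tcp 192.168.39.106:22: connect: no route to host".
		fmt.Println("status error:", err)
		return
	}
	defer conn.Close()
	fmt.Println("SSH port reachable; status would not report Error")
}
```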

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (140.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-725962 --alsologtostderr -v=3
E1128 03:49:59.770444  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:59.987699  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:50:10.257713  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.262987  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.273287  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.293593  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.333954  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.414307  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.574771  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:10.895513  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:11.535861  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:12.816964  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:15.377545  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:20.498632  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:30.739808  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:50:40.731596  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:50:40.947983  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:50:51.220764  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:51:06.531309  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-725962 --alsologtostderr -v=3: exit status 82 (2m1.760201859s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-725962"  ...
	* Stopping node "default-k8s-diff-port-725962"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:49:52.685858  384240 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:49:52.686148  384240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:52.686159  384240 out.go:309] Setting ErrFile to fd 2...
	I1128 03:49:52.686164  384240 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:49:52.686368  384240 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:49:52.686666  384240 out.go:303] Setting JSON to false
	I1128 03:49:52.686747  384240 mustload.go:65] Loading cluster: default-k8s-diff-port-725962
	I1128 03:49:52.687123  384240 config.go:182] Loaded profile config "default-k8s-diff-port-725962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:49:52.687199  384240 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/default-k8s-diff-port-725962/config.json ...
	I1128 03:49:52.687379  384240 mustload.go:65] Loading cluster: default-k8s-diff-port-725962
	I1128 03:49:52.687484  384240 config.go:182] Loaded profile config "default-k8s-diff-port-725962": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:49:52.687509  384240 stop.go:39] StopHost: default-k8s-diff-port-725962
	I1128 03:49:52.687968  384240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:49:52.688024  384240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:49:52.703524  384240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I1128 03:49:52.704110  384240 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:49:52.704774  384240 main.go:141] libmachine: Using API Version  1
	I1128 03:49:52.704801  384240 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:49:52.705157  384240 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:49:52.707993  384240 out.go:177] * Stopping node "default-k8s-diff-port-725962"  ...
	I1128 03:49:52.709576  384240 main.go:141] libmachine: Stopping "default-k8s-diff-port-725962"...
	I1128 03:49:52.709609  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Calling .GetState
	I1128 03:49:52.710990  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Calling .Stop
	I1128 03:49:52.714310  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 0/60
	I1128 03:49:53.715914  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 1/60
	I1128 03:49:54.717184  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 2/60
	I1128 03:49:55.718529  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 3/60
	I1128 03:49:56.720130  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 4/60
	I1128 03:49:57.722091  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 5/60
	I1128 03:49:58.723546  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 6/60
	I1128 03:49:59.725027  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 7/60
	I1128 03:50:00.726368  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 8/60
	I1128 03:50:01.727837  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 9/60
	I1128 03:50:02.730418  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 10/60
	I1128 03:50:03.731812  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 11/60
	I1128 03:50:04.733278  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 12/60
	I1128 03:50:05.734810  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 13/60
	I1128 03:50:06.736162  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 14/60
	I1128 03:50:07.738395  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 15/60
	I1128 03:50:08.739751  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 16/60
	I1128 03:50:09.741134  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 17/60
	I1128 03:50:10.742591  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 18/60
	I1128 03:50:11.744024  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 19/60
	I1128 03:50:12.746598  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 20/60
	I1128 03:50:13.747985  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 21/60
	I1128 03:50:14.749366  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 22/60
	I1128 03:50:15.751630  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 23/60
	I1128 03:50:16.752827  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 24/60
	I1128 03:50:17.754708  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 25/60
	I1128 03:50:18.755994  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 26/60
	I1128 03:50:19.757488  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 27/60
	I1128 03:50:20.759322  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 28/60
	I1128 03:50:21.760709  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 29/60
	I1128 03:50:22.762830  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 30/60
	I1128 03:50:23.764135  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 31/60
	I1128 03:50:24.765800  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 32/60
	I1128 03:50:25.767208  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 33/60
	I1128 03:50:26.768526  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 34/60
	I1128 03:50:27.770595  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 35/60
	I1128 03:50:28.771816  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 36/60
	I1128 03:50:29.773295  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 37/60
	I1128 03:50:30.774547  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 38/60
	I1128 03:50:31.775820  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 39/60
	I1128 03:50:32.778110  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 40/60
	I1128 03:50:33.779872  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 41/60
	I1128 03:50:34.781252  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 42/60
	I1128 03:50:35.782954  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 43/60
	I1128 03:50:36.784562  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 44/60
	I1128 03:50:37.786601  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 45/60
	I1128 03:50:38.788166  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 46/60
	I1128 03:50:39.789601  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 47/60
	I1128 03:50:40.790983  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 48/60
	I1128 03:50:41.792302  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 49/60
	I1128 03:50:42.794588  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 50/60
	I1128 03:50:43.796086  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 51/60
	I1128 03:50:44.797992  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 52/60
	I1128 03:50:45.799371  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 53/60
	I1128 03:50:46.800846  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 54/60
	I1128 03:50:47.802898  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 55/60
	I1128 03:50:48.804331  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 56/60
	I1128 03:50:49.805732  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 57/60
	I1128 03:50:50.807014  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 58/60
	I1128 03:50:51.808470  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 59/60
	I1128 03:50:52.809966  384240 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:50:52.810032  384240 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:52.810061  384240 retry.go:31] will retry after 1.443141675s: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:50:54.254625  384240 stop.go:39] StopHost: default-k8s-diff-port-725962
	I1128 03:50:54.255044  384240 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:50:54.255113  384240 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:50:54.270353  384240 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37127
	I1128 03:50:54.270823  384240 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:50:54.271318  384240 main.go:141] libmachine: Using API Version  1
	I1128 03:50:54.271344  384240 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:50:54.271653  384240 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:50:54.273722  384240 out.go:177] * Stopping node "default-k8s-diff-port-725962"  ...
	I1128 03:50:54.274998  384240 main.go:141] libmachine: Stopping "default-k8s-diff-port-725962"...
	I1128 03:50:54.275022  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Calling .GetState
	I1128 03:50:54.276767  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Calling .Stop
	I1128 03:50:54.280342  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 0/60
	I1128 03:50:55.281816  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 1/60
	I1128 03:50:56.283324  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 2/60
	I1128 03:50:57.284802  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 3/60
	I1128 03:50:58.286373  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 4/60
	I1128 03:50:59.288345  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 5/60
	I1128 03:51:00.289838  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 6/60
	I1128 03:51:01.291197  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 7/60
	I1128 03:51:02.292776  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 8/60
	I1128 03:51:03.294232  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 9/60
	I1128 03:51:04.296342  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 10/60
	I1128 03:51:05.297877  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 11/60
	I1128 03:51:06.299275  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 12/60
	I1128 03:51:07.300908  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 13/60
	I1128 03:51:08.302337  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 14/60
	I1128 03:51:09.304296  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 15/60
	I1128 03:51:10.305625  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 16/60
	I1128 03:51:11.307040  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 17/60
	I1128 03:51:12.308434  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 18/60
	I1128 03:51:13.309844  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 19/60
	I1128 03:51:14.311868  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 20/60
	I1128 03:51:15.313432  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 21/60
	I1128 03:51:16.315064  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 22/60
	I1128 03:51:17.316515  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 23/60
	I1128 03:51:18.318839  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 24/60
	I1128 03:51:19.321147  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 25/60
	I1128 03:51:20.323616  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 26/60
	I1128 03:51:21.325127  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 27/60
	I1128 03:51:22.327551  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 28/60
	I1128 03:51:23.329726  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 29/60
	I1128 03:51:24.331680  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 30/60
	I1128 03:51:25.333045  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 31/60
	I1128 03:51:26.334455  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 32/60
	I1128 03:51:27.335828  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 33/60
	I1128 03:51:28.337274  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 34/60
	I1128 03:51:29.338862  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 35/60
	I1128 03:51:30.340119  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 36/60
	I1128 03:51:31.341694  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 37/60
	I1128 03:51:32.343276  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 38/60
	I1128 03:51:33.344715  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 39/60
	I1128 03:51:34.346813  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 40/60
	I1128 03:51:35.348226  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 41/60
	I1128 03:51:36.349318  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 42/60
	I1128 03:51:37.350775  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 43/60
	I1128 03:51:38.352421  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 44/60
	I1128 03:51:39.354503  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 45/60
	I1128 03:51:40.355820  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 46/60
	I1128 03:51:41.357308  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 47/60
	I1128 03:51:42.358682  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 48/60
	I1128 03:51:43.360234  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 49/60
	I1128 03:51:44.361946  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 50/60
	I1128 03:51:45.363450  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 51/60
	I1128 03:51:46.364772  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 52/60
	I1128 03:51:47.366109  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 53/60
	I1128 03:51:48.367570  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 54/60
	I1128 03:51:49.369618  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 55/60
	I1128 03:51:50.371050  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 56/60
	I1128 03:51:51.372613  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 57/60
	I1128 03:51:52.374092  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 58/60
	I1128 03:51:53.375609  384240 main.go:141] libmachine: (default-k8s-diff-port-725962) Waiting for machine to stop 59/60
	I1128 03:51:54.376332  384240 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 03:51:54.376390  384240 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 03:51:54.378214  384240 out.go:177] 
	W1128 03:51:54.379828  384240 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 03:51:54.379847  384240 out.go:239] * 
	* 
	W1128 03:51:54.382746  384240 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 03:51:54.384212  384240 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-725962 --alsologtostderr -v=3" : exit status 82
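For context, the stderr above shows the stop path polling the VM state once per second for 60 attempts, then giving up with the same 'unable to stop vm, current state "Running"' error that is surfaced as GUEST_STOP_TIMEOUT. The Go sketch below reproduces only that wait pattern; it is an illustration with hypothetical names, not minikube's actual libmachine code.

    // Illustrative sketch only (hypothetical names, not minikube's code):
    // poll the VM state once per second for a fixed number of attempts,
    // then give up with the same "still Running" error seen above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // getState is a stand-in for the libmachine driver's state query.
    func waitForStop(getState func() string, attempts int) error {
        for i := 0; i < attempts; i++ {
            if getState() == "Stopped" {
                return nil
            }
            fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
            time.Sleep(1 * time.Second)
        }
        return errors.New(`unable to stop vm, current state "Running"`)
    }

    func main() {
        // A VM that never leaves "Running" reproduces the 0/60 .. 59/60 lines above.
        alwaysRunning := func() string { return "Running" }
        if err := waitForStop(alwaysRunning, 60); err != nil {
            fmt.Println("stop err:", err)
        }
    }
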
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
E1128 03:51:58.568627  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.573939  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.584177  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.604479  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.644823  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.725206  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:58.801453  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:51:58.885827  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:59.206560  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:51:59.847587  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962: exit status 3 (18.44303191s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:12.829216  384893 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1128 03:52:12.829240  384893 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-725962" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (140.20s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
E1128 03:51:38.321236  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657: exit status 3 (3.199277669s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:51:39.421268  384664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E1128 03:51:39.421290  384664 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-666657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-666657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154400337s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-666657 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
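The MK_ADDON_ENABLE_PAUSED exit above is a downstream symptom: every command that needs to reach the node over SSH fails with "dial tcp 192.168.50.7:22: connect: no route to host", so the paused-check (crictl over SSH) cannot run at all. As a rough illustration only, the same dial error can be reproduced with a plain TCP dial to the address reported in this log; this snippet is not part of the test suite.

    // Rough illustration (not part of the test suite): a plain TCP dial to the
    // node's SSH port; against the unreachable VM in this log it returns
    // "dial tcp 192.168.50.7:22: connect: no route to host".
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.50.7:22", 5*time.Second)
        if err != nil {
            fmt.Println("status error:", err)
            return
        }
        defer conn.Close()
        fmt.Println("ssh port reachable")
    }
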
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657: exit status 3 (3.061563965s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:51:48.637303  384733 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host
	E1128 03:51:48.637329  384733 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.7:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-666657" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411: exit status 3 (3.199630862s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:51:54.269229  384845 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E1128 03:51:54.269254  384845 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-644411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p newest-cni-644411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15398508s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p newest-cni-644411 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411
E1128 03:52:01.128741  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:52:02.652442  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:52:02.869198  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411: exit status 3 (3.062052479s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:03.485275  384945 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host
	E1128 03:52:03.485294  384945 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.227:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "newest-cni-644411" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
E1128 03:52:06.220371  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:06.540976  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:07.182003  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:08.462827  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:08.809786  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348: exit status 3 (3.1992882s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:09.373274  385020 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1128 03:52:09.373296  385020 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-222348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1128 03:52:11.023887  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-222348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153736488s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-222348 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348: exit status 3 (3.062151512s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:18.589199  385119 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1128 03:52:18.589220  385119 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-222348" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962: exit status 3 (3.199976305s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:16.029243  385078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1128 03:52:16.029267  385078 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-725962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1128 03:52:16.144753  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-725962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153588873s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-725962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962: exit status 3 (3.062135858s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 03:52:25.245231  385236 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host
	E1128 03:52:25.245253  385236 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.13:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-725962" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (139.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-672176 --alsologtostderr -v=3
E1128 04:00:06.724413  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 04:00:10.257625  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:01:17.839271  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:01:23.484290  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-672176 --alsologtostderr -v=3: exit status 82 (2m0.855216391s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-672176"  ...
	* Stopping node "embed-certs-672176"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:59:48.164845  387744 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:59:48.165188  387744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:59:48.165204  387744 out.go:309] Setting ErrFile to fd 2...
	I1128 03:59:48.165211  387744 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:59:48.165431  387744 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:59:48.165683  387744 out.go:303] Setting JSON to false
	I1128 03:59:48.165795  387744 mustload.go:65] Loading cluster: embed-certs-672176
	I1128 03:59:48.166217  387744 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:59:48.166316  387744 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 03:59:48.166526  387744 mustload.go:65] Loading cluster: embed-certs-672176
	I1128 03:59:48.166646  387744 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:59:48.166671  387744 stop.go:39] StopHost: embed-certs-672176
	I1128 03:59:48.167150  387744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:59:48.167222  387744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:59:48.182656  387744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43099
	I1128 03:59:48.183168  387744 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:59:48.183868  387744 main.go:141] libmachine: Using API Version  1
	I1128 03:59:48.183896  387744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:59:48.184251  387744 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:59:48.186956  387744 out.go:177] * Stopping node "embed-certs-672176"  ...
	I1128 03:59:48.188814  387744 main.go:141] libmachine: Stopping "embed-certs-672176"...
	I1128 03:59:48.188840  387744 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 03:59:48.190660  387744 main.go:141] libmachine: (embed-certs-672176) Calling .Stop
	I1128 03:59:48.194952  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 0/60
	I1128 03:59:49.196639  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 1/60
	I1128 03:59:50.198228  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 2/60
	I1128 03:59:51.200520  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 3/60
	I1128 03:59:52.202026  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 4/60
	I1128 03:59:53.204347  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 5/60
	I1128 03:59:54.206110  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 6/60
	I1128 03:59:55.207404  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 7/60
	I1128 03:59:56.208761  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 8/60
	I1128 03:59:57.210248  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 9/60
	I1128 03:59:58.211551  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 10/60
	I1128 03:59:59.213607  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 11/60
	I1128 04:00:00.214892  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 12/60
	I1128 04:00:01.216144  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 13/60
	I1128 04:00:02.218004  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 14/60
	I1128 04:00:03.220235  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 15/60
	I1128 04:00:04.221459  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 16/60
	I1128 04:00:05.223449  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 17/60
	I1128 04:00:06.225113  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 18/60
	I1128 04:00:07.227510  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 19/60
	I1128 04:00:08.229193  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 20/60
	I1128 04:00:09.230839  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 21/60
	I1128 04:00:10.232218  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 22/60
	I1128 04:00:11.233841  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 23/60
	I1128 04:00:12.235178  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 24/60
	I1128 04:00:13.237009  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 25/60
	I1128 04:00:14.238598  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 26/60
	I1128 04:00:15.240864  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 27/60
	I1128 04:00:16.242654  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 28/60
	I1128 04:00:17.243927  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 29/60
	I1128 04:00:18.245476  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 30/60
	I1128 04:00:19.247078  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 31/60
	I1128 04:00:20.248424  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 32/60
	I1128 04:00:21.249782  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 33/60
	I1128 04:00:22.251455  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 34/60
	I1128 04:00:23.253463  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 35/60
	I1128 04:00:24.255403  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 36/60
	I1128 04:00:25.257683  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 37/60
	I1128 04:00:26.259061  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 38/60
	I1128 04:00:27.261432  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 39/60
	I1128 04:00:28.263428  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 40/60
	I1128 04:00:29.264990  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 41/60
	I1128 04:00:30.266330  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 42/60
	I1128 04:00:31.268271  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 43/60
	I1128 04:00:32.269662  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 44/60
	I1128 04:00:33.271529  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 45/60
	I1128 04:00:34.272791  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 46/60
	I1128 04:00:35.274082  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 47/60
	I1128 04:00:36.275657  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 48/60
	I1128 04:00:37.277999  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 49/60
	I1128 04:00:38.280263  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 50/60
	I1128 04:00:39.281864  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 51/60
	I1128 04:00:40.283463  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 52/60
	I1128 04:00:41.285014  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 53/60
	I1128 04:00:42.286875  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 54/60
	I1128 04:00:43.289113  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 55/60
	I1128 04:00:44.291632  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 56/60
	I1128 04:00:45.293274  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 57/60
	I1128 04:00:46.295598  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 58/60
	I1128 04:00:47.297086  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 59/60
	I1128 04:00:48.298354  387744 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 04:00:48.298426  387744 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 04:00:48.298448  387744 retry.go:31] will retry after 525.73874ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 04:00:48.825163  387744 stop.go:39] StopHost: embed-certs-672176
	I1128 04:00:48.825532  387744 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:00:48.825581  387744 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:00:48.841756  387744 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39773
	I1128 04:00:48.842273  387744 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:00:48.842871  387744 main.go:141] libmachine: Using API Version  1
	I1128 04:00:48.842906  387744 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:00:48.843324  387744 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:00:48.845271  387744 out.go:177] * Stopping node "embed-certs-672176"  ...
	I1128 04:00:48.846602  387744 main.go:141] libmachine: Stopping "embed-certs-672176"...
	I1128 04:00:48.846617  387744 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:00:48.848308  387744 main.go:141] libmachine: (embed-certs-672176) Calling .Stop
	I1128 04:00:48.851390  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 0/60
	I1128 04:00:49.853142  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 1/60
	I1128 04:00:50.855210  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 2/60
	I1128 04:00:51.856638  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 3/60
	I1128 04:00:52.858155  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 4/60
	I1128 04:00:53.859754  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 5/60
	I1128 04:00:54.861201  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 6/60
	I1128 04:00:55.862712  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 7/60
	I1128 04:00:56.864696  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 8/60
	I1128 04:00:57.866923  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 9/60
	I1128 04:00:58.868742  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 10/60
	I1128 04:00:59.870600  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 11/60
	I1128 04:01:00.871960  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 12/60
	I1128 04:01:01.873891  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 13/60
	I1128 04:01:02.876162  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 14/60
	I1128 04:01:03.878034  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 15/60
	I1128 04:01:04.879543  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 16/60
	I1128 04:01:05.880740  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 17/60
	I1128 04:01:06.882077  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 18/60
	I1128 04:01:07.883547  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 19/60
	I1128 04:01:08.885014  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 20/60
	I1128 04:01:09.886314  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 21/60
	I1128 04:01:10.887833  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 22/60
	I1128 04:01:11.889872  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 23/60
	I1128 04:01:12.891401  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 24/60
	I1128 04:01:13.892716  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 25/60
	I1128 04:01:14.894086  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 26/60
	I1128 04:01:15.895648  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 27/60
	I1128 04:01:16.897256  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 28/60
	I1128 04:01:17.898937  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 29/60
	I1128 04:01:18.900817  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 30/60
	I1128 04:01:19.902598  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 31/60
	I1128 04:01:20.904090  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 32/60
	I1128 04:01:21.905530  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 33/60
	I1128 04:01:22.907356  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 34/60
	I1128 04:01:23.908723  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 35/60
	I1128 04:01:24.910107  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 36/60
	I1128 04:01:25.912131  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 37/60
	I1128 04:01:26.913518  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 38/60
	I1128 04:01:27.915406  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 39/60
	I1128 04:01:28.916722  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 40/60
	I1128 04:01:29.918172  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 41/60
	I1128 04:01:30.920083  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 42/60
	I1128 04:01:31.922133  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 43/60
	I1128 04:01:32.923556  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 44/60
	I1128 04:01:33.925695  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 45/60
	I1128 04:01:34.927093  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 46/60
	I1128 04:01:35.928566  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 47/60
	I1128 04:01:36.930073  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 48/60
	I1128 04:01:37.931369  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 49/60
	I1128 04:01:38.933591  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 50/60
	I1128 04:01:39.935610  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 51/60
	I1128 04:01:40.937555  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 52/60
	I1128 04:01:41.939322  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 53/60
	I1128 04:01:42.940833  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 54/60
	I1128 04:01:43.942939  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 55/60
	I1128 04:01:44.944466  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 56/60
	I1128 04:01:45.945980  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 57/60
	I1128 04:01:46.947454  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 58/60
	I1128 04:01:47.948956  387744 main.go:141] libmachine: (embed-certs-672176) Waiting for machine to stop 59/60
	I1128 04:01:48.949549  387744 stop.go:59] stop err: unable to stop vm, current state "Running"
	W1128 04:01:48.949621  387744 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1128 04:01:48.951483  387744 out.go:177] 
	W1128 04:01:48.953120  387744 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1128 04:01:48.953148  387744 out.go:239] * 
	* 
	W1128 04:01:48.956284  387744 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1128 04:01:48.957861  387744 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-672176 --alsologtostderr -v=3" : exit status 82
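Compared with the earlier stop failure, this stderr also shows the retry wrapper (retry.go: "will retry after 525.73874ms"): after the first 60-attempt wait times out, the whole StopHost call is retried once with a short backoff before the command exits with GUEST_STOP_TIMEOUT. Below is a minimal sketch of that outer pattern only, with hypothetical names; it is not minikube's retry implementation.

    // Minimal sketch (hypothetical, not minikube's retry.go): retry a failing
    // stop once after a short backoff, then surface the last error, mirroring
    // the two "Stopping node" passes in the stderr above.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    func stopHost() error {
        // Stand-in for the real stop; the VM in this log never left "Running".
        return errors.New(`Temporary Error: stop: unable to stop vm, current state "Running"`)
    }

    func main() {
        const attempts = 2
        var err error
        for i := 0; i < attempts; i++ {
            if err = stopHost(); err == nil {
                return
            }
            if i < attempts-1 {
                fmt.Printf("will retry after %v: %v\n", 525*time.Millisecond, err)
                time.Sleep(525 * time.Millisecond)
            }
        }
        fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM:", err)
    }
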
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
E1128 04:01:58.569005  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:02:05.903703  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176: exit status 3 (18.557855664s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:02:07.517238  388078 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host
	E1128 04:02:07.517263  388078 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-672176" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.41s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176: exit status 3 (3.203524428s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:02:10.721229  388153 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host
	E1128 04:02:10.721248  388153 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-672176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-672176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.150113811s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-672176 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176: exit status 3 (3.061515444s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1128 04:02:19.933232  388222 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host
	E1128 04:02:19.933251  388222 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.208:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-672176" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)
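The EnableAddonAfterStop failures above all follow the same pattern: "minikube stop" leaves the VM unreachable over SSH (dial tcp 192.168.72.208:22: connect: no route to host), so the host status probe reports "Error" rather than the expected "Stopped", and the follow-up "addons enable dashboard" exits with MK_ADDON_ENABLE_PAUSED (exit status 11). The Go sketch below is a rough, standalone reproduction of that stop -> status -> addons-enable sequence; it is not part of the test suite, and the binary path (out/minikube-linux-amd64), profile name (embed-certs-672176), and flags are simply taken from the commands recorded in this run.

    // repro_stop_status.go - hypothetical standalone sketch, not part of the suite.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) (string, error) {
        // Every command here is an invocation of the minikube binary built by this job.
        out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
        return string(out), err
    }

    func main() {
        profile := "embed-certs-672176"

        // Stop the VM. In this run the stop never completed cleanly, leaving the
        // host unreachable over SSH (no route to host on 192.168.72.208:22).
        if out, err := run("stop", "-p", profile, "--alsologtostderr", "-v=3"); err != nil {
            fmt.Printf("stop: %v\n%s\n", err, out)
        }

        // Probe the host state. The test expects "Stopped"; this run observed "Error".
        out, err := run("status", "--format={{.Host}}", "-p", profile, "-n", profile)
        fmt.Printf("host status: %q (err: %v)\n", out, err)

        // Enabling an addon against the unreachable VM reproduces the
        // MK_ADDON_ENABLE_PAUSED / exit status 11 failure seen above.
        if out, err := run("addons", "enable", "dashboard", "-p", profile,
            "--images=MetricsScraper=registry.k8s.io/echoserver:1.4"); err != nil {
            fmt.Printf("addons enable: %v\n%s\n", err, out)
        }
    }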

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:11:25.85707759 +0000 UTC m=+5433.032051740
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-725962 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-725962 logs -n 25: (1.365241888s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-644411             | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
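	(The burst of repeated "kubectl get sa default" runs above is minikube polling, roughly every 500ms per the timestamps, until the "default" ServiceAccount exists — its signal that kube-system privileges have been elevated and the API server is accepting writes. A minimal sketch of that wait loop, using the same binary path and kubeconfig shown in the log; this is an illustration, not minikube's actual Go code path:)

	    # Sketch: poll until the "default" ServiceAccount appears
	    KUBECTL=/var/lib/minikube/binaries/v1.29.0-rc.0/kubectl
	    until sudo "$KUBECTL" get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	        sleep 0.5
	    done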
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
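	(The long /bin/bash pipeline that produced the "host record injected" message above reads the coredns ConfigMap, uses sed to splice a hosts{} block ahead of the "forward . /etc/resolv.conf" line and a "log" directive ahead of "errors", then pipes the result back through "kubectl replace -f -". Reformatted for readability, the same pipeline looks roughly like this — a sketch of the command already shown, no new behavior:)

	    KUBECTL=/var/lib/minikube/binaries/v1.29.0-rc.0/kubectl
	    KCFG=/var/lib/minikube/kubeconfig

	    sudo "$KUBECTL" --kubeconfig="$KCFG" -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | sudo "$KUBECTL" --kubeconfig="$KCFG" replace -f -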
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
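	(With storage-provisioner, default-storageclass and metrics-server applied, the rest of this run only polls pod readiness. A manual spot-check of the same addons against this profile would look roughly like the following; the context name is assumed to match the profile, the metrics-server deployment and storage-provisioner pod names come from the pod list later in this log, and the "standard" StorageClass is minikube's usual default rather than something shown here:)

	    # Sketch: verify the enabled addons on the no-preload-222348 profile
	    kubectl --context no-preload-222348 -n kube-system get deploy metrics-server
	    kubectl --context no-preload-222348 -n kube-system get pod storage-provisioner
	    kubectl --context no-preload-222348 get storageclass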
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
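	(The "(minor skew: 1)" note above is minikube comparing the local kubectl client, 1.28.4, against the cluster's control-plane version, 1.29.0-rc.0. A skew of one minor version is within kubectl's supported +/-1 window, so only an informational line is printed. One way to reproduce the comparison by hand — a sketch that assumes jq is installed:)

	    # Sketch: compare kubectl client vs. server minor versions
	    kubectl version -o json \
	      | jq -r '"client=\(.clientVersion.minor) server=\(.serverVersion.minor)"'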
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
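
The two steps above write a 457-byte CNI "conflist" for the built-in bridge network. Its exact contents are not reproduced in this log (a bridge conflist of this kind normally pairs the bridge plugin with host-local IPAM, but that is an assumption here, not something shown above). A quick way to see what was actually written, if reproducing this run locally:

    # Illustrative only: dump the CNI config minikube just wrote inside the VM
    minikube -p old-k8s-version-666657 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist
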
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
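
The burst of `kubectl get sa default` calls between 04:03:31 and 04:03:46 is a retry loop: startup is not declared complete until the cluster's `default` ServiceAccount exists, which here took ~15s (the elevateKubeSystemPrivileges metric above). A rough manual equivalent of that wait, reusing the exact command from the log (illustrative, not minikube's own code):

    # Poll until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
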
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
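
The `replace -f -` pipeline that completed above edits the CoreDNS Corefile so cluster DNS resolves host.minikube.internal to the host-side gateway. Reconstructed directly from the sed expressions in the logged command, the injected fragment is:

    hosts {
       192.168.50.1 host.minikube.internal
       fallthrough
    }
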
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
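
Note that the metrics-server addon was installed with the image fake.domain/registry.k8s.io/echoserver:1.4 (04:03:46.348 above), an image reference on a placeholder domain, which is consistent with metrics-server-74d5856cc6-wlfq5 remaining Pending for the rest of this log. If reproducing locally, a quick way to confirm what is blocking it (pod name taken from this run):

    # Illustrative: show why the metrics-server pod is not becoming Ready
    kubectl -n kube-system describe pod metrics-server-74d5856cc6-wlfq5
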
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
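
The health wait above issues an HTTPS GET against the apiserver's /healthz endpoint and accepts the 200/ok response. An equivalent manual probe of the same endpoint (certificate verification skipped here, since the log does not show which CA bundle minikube uses for this check):

    # Illustrative manual probe of the same endpoint
    curl -sk https://192.168.50.7:8443/healthz
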
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
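
The system_pods wait that finishes here (1m6.8s) repeatedly lists kube-system pods, with increasing retry intervals, until the etcd, kube-apiserver, kube-controller-manager and kube-scheduler static pods all report Running. Roughly the same view from the outside, if watching such a startup interactively:

    # Illustrative: watch kube-system pods come up during control-plane startup
    kubectl -n kube-system get pods -w
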
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:57:20 UTC, ends at Tue 2023-11-28 04:11:26 UTC. --
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.675411624Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&PodSandboxMetadata{Name:busybox,Uid:74311fc7-06a5-4161-8803-f0ff8bf14071,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143886525443498,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:57:58.541234676Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-5pf9p,Uid:ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143886502536130,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:57:58.541240104Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:371d83f79b599ffe94f23043f1a6d07b77e8698f93df87aee17f5db1522948bb,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-9bqg8,Uid:48d11dc2-ea03-4b2d-ac8b-afa0c6273c80,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143883087754961,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-9bqg8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48d11dc2-ea03-4b2d-ac8b-afa0c6273c80,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:57:58.541248905Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&PodSandboxMetadata{Name:kube-proxy-sp9nc,Uid:b54c0c14-5531-417f-8ce9-547c4bc9c9cf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143878903780591,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b54c0c14-5531-417f-8ce9-547c4bc9c9cf,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T03:57:58.541247849Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:074eb0a7-45ef-4b63-9068-e061637207f7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143878881877216,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T03:57:58.541244102Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-725962,Uid:e6e06547bea8addecb08d9ab4c2c3384,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143872080581798,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e6e06547bea8addecb08d9ab4c2c3384,kubernetes.io/config.seen: 2023-11-28T03:57:51.528183160Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-725962,Uid:97fadb1204004b279b9d2aaedce5fe68,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143872062153380,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97fadb1204004b279b9d2aaedce5fe68,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 97fadb1204004b279b9d2aaedce5fe68,kubernetes.io/config.seen: 2023-11-28T03:57:51.528182436Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-725962,Uid:89490cdb2aefb35198720f14b435f087,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143872034887447,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-def
ault-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.13:8444,kubernetes.io/config.hash: 89490cdb2aefb35198720f14b435f087,kubernetes.io/config.seen: 2023-11-28T03:57:51.528181493Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-725962,Uid:6e3299c0250acac00f1296eb7f1ff28d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143872023169510,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-clien
t-urls: https://192.168.61.13:2379,kubernetes.io/config.hash: 6e3299c0250acac00f1296eb7f1ff28d,kubernetes.io/config.seen: 2023-11-28T03:57:51.528178039Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=59280ad2-58cf-497b-903a-5e55f11df84e name=/runtime.v1.RuntimeService/ListPodSandbox
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.676029413Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b09ba2c9-035d-4832-963e-54bced340f98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.676097626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b09ba2c9-035d-4832-963e-54bced340f98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.676966983Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b09ba2c9-035d-4832-963e-54bced340f98 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.702415850Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=54c8c34c-acab-4b1e-9a02-4bb62397ee27 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.702500264Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=54c8c34c-acab-4b1e-9a02-4bb62397ee27 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.713827367Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=b7c6b931-1a34-4c73-9b7b-e5ebbd633fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.714437740Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144686714410714,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=b7c6b931-1a34-4c73-9b7b-e5ebbd633fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.714927367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0df529c0-dba6-4ac7-b5a1-4afc7c9ec84c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.715002979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0df529c0-dba6-4ac7-b5a1-4afc7c9ec84c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.715195878Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0df529c0-dba6-4ac7-b5a1-4afc7c9ec84c name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.758200486Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1a9ffa68-dfd4-47f6-9a4d-b5eeafb73783 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.758259600Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1a9ffa68-dfd4-47f6-9a4d-b5eeafb73783 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.761109592Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=911bcecc-713f-40cc-9024-c64587a0ffd9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.761593015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144686761574105,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=911bcecc-713f-40cc-9024-c64587a0ffd9 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.762186319Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1eee8c0d-ebbf-4852-9959-d55fd808f582 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.762261510Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1eee8c0d-ebbf-4852-9959-d55fd808f582 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.762600958Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1eee8c0d-ebbf-4852-9959-d55fd808f582 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.800149140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=2fd9fb2c-d5dd-4b4c-9fbd-208caf7fdb5a name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.800261086Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=2fd9fb2c-d5dd-4b4c-9fbd-208caf7fdb5a name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.802225414Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2c5bf3dc-6a17-4897-a6a2-7697627e1ed1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.802672571Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144686802659229,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2c5bf3dc-6a17-4897-a6a2-7697627e1ed1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.803645891Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=efbd56d3-7ef7-4b0a-84e6-2e0d64ff73ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.803716191Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=efbd56d3-7ef7-4b0a-84e6-2e0d64ff73ee name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:26 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:11:26.803923777Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=efbd56d3-7ef7-4b0a-84e6-2e0d64ff73ee name=/runtime.v1.RuntimeService/ListContainers
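The CRI-O excerpt above is the tail of the crio service journal that minikube collects; the steady stream of Version, ImageFsInfo, ListPodSandbox and ListContainers requests is the kubelet's routine CRI polling, so none of these debug entries is an error by itself. If a longer window is needed, the same journal can be read on the node directly; a sketch, assuming the default-k8s-diff-port-725962 profile named in these log lines is still running:

  minikube -p default-k8s-diff-port-725962 ssh -- sudo journalctl -u crio --no-pager -n 200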
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1806bf0461d3c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       3                   f00e09ac58f21       storage-provisioner
	603734d47a89f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   43660dd16af48       busybox
	4f1b83cb6065a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   f242bc7227c3c       coredns-5dd5756b68-5pf9p
	ef25aa6706867       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       2                   f00e09ac58f21       storage-provisioner
	3c249ebac5ace       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   0c1d33643e6bb       kube-proxy-sp9nc
	39b2c5787e96c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   2a57f714a961f       etcd-default-k8s-diff-port-725962
	09e3428759987       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   a2204d42ef00c       kube-scheduler-default-k8s-diff-port-725962
	d962ca3c6d6a3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   a183800045b25       kube-apiserver-default-k8s-diff-port-725962
	59767f5d5ca26       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   eac1d0b2f5215       kube-controller-manager-default-k8s-diff-port-725962
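The container status table is minikube's summary of the same CRI state; an equivalent listing, including exited containers such as the earlier storage-provisioner attempt, can be produced on the node with crictl. A sketch, again assuming the default-k8s-diff-port-725962 profile from this log:

  minikube -p default-k8s-diff-port-725962 ssh -- sudo crictl ps -a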
	
	* 
	* ==> coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47928 - 62110 "HINFO IN 358128015453795916.2116480082888628902. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027807955s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-725962
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-725962
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=default-k8s-diff-port-725962
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T03_48_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:48:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-725962
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:11:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:08:43 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:08:43 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:08:43 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:08:43 +0000   Tue, 28 Nov 2023 03:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.13
	  Hostname:    default-k8s-diff-port-725962
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 844aae334ccf47b7b0357768a02d626f
	  System UUID:                844aae33-4ccf-47b7-b035-7768a02d626f
	  Boot ID:                    7fe44eff-bca9-43e5-852a-449b02c0b7ca
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-5pf9p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     22m
	  kube-system                 etcd-default-k8s-diff-port-725962                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-default-k8s-diff-port-725962             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-725962    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-sp9nc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-default-k8s-diff-port-725962             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-9bqg8                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 22m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeReady
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-725962 event: Registered Node default-k8s-diff-port-725962 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-725962 event: Registered Node default-k8s-diff-port-725962 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 03:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.869783] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.824258] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154369] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.476450] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.862911] systemd-fstab-generator[621]: Ignoring "noauto" for root device
	[  +0.113655] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.155484] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.138605] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.228899] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[ +17.681888] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[Nov28 03:58] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] <==
	* {"level":"info","ts":"2023-11-28T03:58:29.139216Z","caller":"traceutil/trace.go:171","msg":"trace[394929356] transaction","detail":"{read_only:false; response_revision:580; number_of_response:1; }","duration":"156.424811ms","start":"2023-11-28T03:58:28.982763Z","end":"2023-11-28T03:58:29.139188Z","steps":["trace[394929356] 'process raft request'  (duration: 156.217208ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T03:58:29.4598Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"209.645644ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940094681 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" mod_revision:573 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T03:58:29.460914Z","caller":"traceutil/trace.go:171","msg":"trace[1389000462] transaction","detail":"{read_only:false; response_revision:581; number_of_response:1; }","duration":"343.168082ms","start":"2023-11-28T03:58:29.117733Z","end":"2023-11-28T03:58:29.460901Z","steps":["trace[1389000462] 'process raft request'  (duration: 132.197242ms)","trace[1389000462] 'compare'  (duration: 209.291133ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T03:58:29.461034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T03:58:29.117713Z","time spent":"343.271106ms","remote":"127.0.0.1:58292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" mod_revision:573 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" > >"}
	{"level":"info","ts":"2023-11-28T03:58:29.461115Z","caller":"traceutil/trace.go:171","msg":"trace[998512203] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"162.032768ms","start":"2023-11-28T03:58:29.299074Z","end":"2023-11-28T03:58:29.461106Z","steps":["trace[998512203] 'process raft request'  (duration: 161.581345ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T03:58:29.751646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.956617ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940094687 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:16a18c14138d56de>","response":"size:41"}
	{"level":"info","ts":"2023-11-28T03:58:29.751848Z","caller":"traceutil/trace.go:171","msg":"trace[108578154] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"206.895236ms","start":"2023-11-28T03:58:29.544938Z","end":"2023-11-28T03:58:29.751833Z","steps":["trace[108578154] 'read index received'  (duration: 43.685145ms)","trace[108578154] 'applied index is now lower than readState.Index'  (duration: 163.208302ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T03:58:29.751967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.035416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T03:58:29.752018Z","caller":"traceutil/trace.go:171","msg":"trace[381543735] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:582; }","duration":"207.093404ms","start":"2023-11-28T03:58:29.544914Z","end":"2023-11-28T03:58:29.752008Z","steps":["trace[381543735] 'agreement among raft nodes before linearized reading'  (duration: 206.993139ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T03:58:29.997133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.289108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-9bqg8\" ","response":"range_response_count:1 size:4036"}
	{"level":"info","ts":"2023-11-28T03:58:29.997233Z","caller":"traceutil/trace.go:171","msg":"trace[1330111037] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-9bqg8; range_end:; response_count:1; response_revision:583; }","duration":"125.397543ms","start":"2023-11-28T03:58:29.871811Z","end":"2023-11-28T03:58:29.997209Z","steps":["trace[1330111037] 'range keys from in-memory index tree'  (duration: 125.006097ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.302018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"526.535382ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940098037 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.13\" mod_revision:1028 > success:<request_put:<key:\"/registry/masterleases/192.168.61.13\" value_size:66 lease:1630738557940098034 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.13\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:07:30.302481Z","caller":"traceutil/trace.go:171","msg":"trace[1668307054] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"646.462034ms","start":"2023-11-28T04:07:29.655976Z","end":"2023-11-28T04:07:30.302438Z","steps":["trace[1668307054] 'process raft request'  (duration: 646.366836ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.302609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:29.655959Z","time spent":"646.591503ms","remote":"127.0.0.1:58270","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1034 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-11-28T04:07:30.302833Z","caller":"traceutil/trace.go:171","msg":"trace[1981600562] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"652.035167ms","start":"2023-11-28T04:07:29.650782Z","end":"2023-11-28T04:07:30.302817Z","steps":["trace[1981600562] 'process raft request'  (duration: 124.469731ms)","trace[1981600562] 'compare'  (duration: 526.381329ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:07:30.302912Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:29.650766Z","time spent":"652.105813ms","remote":"127.0.0.1:58240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.13\" mod_revision:1028 > success:<request_put:<key:\"/registry/masterleases/192.168.61.13\" value_size:66 lease:1630738557940098034 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.13\" > >"}
	{"level":"warn","ts":"2023-11-28T04:07:30.990811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"432.694362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-11-28T04:07:30.990905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.461575ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940098043 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" mod_revision:1029 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:07:30.99101Z","caller":"traceutil/trace.go:171","msg":"trace[1434998855] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"396.403208ms","start":"2023-11-28T04:07:30.594594Z","end":"2023-11-28T04:07:30.990997Z","steps":["trace[1434998855] 'process raft request'  (duration: 265.790792ms)","trace[1434998855] 'compare'  (duration: 129.84135ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:07:30.99107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:30.594573Z","time spent":"396.471652ms","remote":"127.0.0.1:58292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" mod_revision:1029 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" > >"}
	{"level":"info","ts":"2023-11-28T04:07:30.990922Z","caller":"traceutil/trace.go:171","msg":"trace[1968723066] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1036; }","duration":"432.815922ms","start":"2023-11-28T04:07:30.558094Z","end":"2023-11-28T04:07:30.99091Z","steps":["trace[1968723066] 'range keys from in-memory index tree'  (duration: 432.628464ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.991246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:30.558082Z","time spent":"433.145761ms","remote":"127.0.0.1:58274","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2023-11-28T04:07:56.45086Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2023-11-28T04:07:56.45915Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":816,"took":"7.958756ms","hash":2648089177}
	{"level":"info","ts":"2023-11-28T04:07:56.459226Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2648089177,"revision":816,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  04:11:27 up 14 min,  0 users,  load average: 0.23, 0.20, 0.17
	Linux default-k8s-diff-port-725962 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] <==
	* I1128 04:07:58.451436       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:07:59.451489       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:07:59.451722       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:07:59.451781       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:07:59.451495       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:07:59.452132       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:07:59.453475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:08:58.266367       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:08:59.452974       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:08:59.453081       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:08:59.453109       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:08:59.454449       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:08:59.454568       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:08:59.454599       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:09:58.266452       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 04:10:58.265949       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:10:59.453519       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:10:59.453593       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:10:59.453607       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:10:59.454818       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:10:59.454945       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:10:59.454957       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] <==
	* I1128 04:05:41.455482       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:06:10.860923       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:06:11.465398       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:06:40.867987       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:06:41.474666       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:07:10.874759       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:07:11.483843       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:07:40.882032       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:07:41.495781       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:08:10.888691       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:08:11.506888       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:08:40.894500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:08:41.516251       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:09:10.901210       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:09:11.528985       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:09:14.590560       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="383.373µs"
	I1128 04:09:25.592111       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="210.298µs"
	E1128 04:09:40.908614       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:09:41.538539       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:10:10.916208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:10:11.552034       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:10:40.922998       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:10:41.563378       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:11:10.931218       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:11:11.576822       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] <==
	* I1128 03:58:00.338942       1 server_others.go:69] "Using iptables proxy"
	I1128 03:58:00.367046       1 node.go:141] Successfully retrieved node IP: 192.168.61.13
	I1128 03:58:00.512269       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 03:58:00.512412       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 03:58:00.521038       1 server_others.go:152] "Using iptables Proxier"
	I1128 03:58:00.521136       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 03:58:00.521570       1 server.go:846] "Version info" version="v1.28.4"
	I1128 03:58:00.521872       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:58:00.524113       1 config.go:188] "Starting service config controller"
	I1128 03:58:00.524173       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 03:58:00.524219       1 config.go:97] "Starting endpoint slice config controller"
	I1128 03:58:00.524241       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 03:58:00.524926       1 config.go:315] "Starting node config controller"
	I1128 03:58:00.524976       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 03:58:00.624782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 03:58:00.625004       1 shared_informer.go:318] Caches are synced for service config
	I1128 03:58:00.625434       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] <==
	* I1128 03:57:55.267632       1 serving.go:348] Generated self-signed cert in-memory
	W1128 03:57:58.371890       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 03:57:58.371954       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 03:57:58.371972       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 03:57:58.371982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 03:57:58.472700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 03:57:58.472777       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:57:58.492420       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 03:57:58.492490       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 03:57:58.496417       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 03:57:58.496551       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 03:57:58.593814       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:57:20 UTC, ends at Tue 2023-11-28 04:11:27 UTC. --
	Nov 28 04:08:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:08:52 default-k8s-diff-port-725962 kubelet[904]: E1128 04:08:52.572611     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:03 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:03.618609     904 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 04:09:03 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:03.618658     904 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Nov 28 04:09:03 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:03.618865     904 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lgs2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-9bqg8_kube-system(48d11dc2-ea03-4b2d-ac8b-afa0c6273c80): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:09:03 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:03.618903     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:14 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:14.571855     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:25 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:25.573772     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:40 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:40.573107     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:51.576970     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:09:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:09:51.587494     904 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:09:51 default-k8s-diff-port-725962 kubelet[904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:09:51 default-k8s-diff-port-725962 kubelet[904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:09:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:10:03 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:03.572666     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:10:16 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:16.572259     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:10:28 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:28.572221     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:10:40 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:40.572419     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:10:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:51.587459     904 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:10:51 default-k8s-diff-port-725962 kubelet[904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:10:51 default-k8s-diff-port-725962 kubelet[904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:10:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:10:53 default-k8s-diff-port-725962 kubelet[904]: E1128 04:10:53.575133     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:11:05 default-k8s-diff-port-725962 kubelet[904]: E1128 04:11:05.573088     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:11:16 default-k8s-diff-port-725962 kubelet[904]: E1128 04:11:16.577146     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	
	* 
	* ==> storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] <==
	* I1128 03:58:31.098141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 03:58:31.119849       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 03:58:31.120383       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 03:58:48.529149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 03:58:48.530235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a!
	I1128 03:58:48.531896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b615493e-abbc-4088-a40d-dcb3a179f972", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a became leader
	I1128 03:58:48.630632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a!
	
	* 
	* ==> storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] <==
	* I1128 03:58:00.330872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1128 03:58:30.335991       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9bqg8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8: exit status 1 (72.29493ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9bqg8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.52s)
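Note: the post-mortem above shows no pods in the kubernetes-dashboard namespace (the etcd range over /registry/pods/kubernetes-dashboard/ returned 0 results) and metrics-server stuck in ImagePullBackOff against fake.domain, the latter apparently by design since the addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table in the next post-mortem). As a hedged, approximate manual equivalent of the 9m readiness wait the test performs (the actual helper lives in start_stop_delete_test.go; the command below is illustrative, not the test's code), one could run:

	kubectl --context default-k8s-diff-port-725962 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s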

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:02:55.196170  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:03:34.222706  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 04:03:43.673974  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 04:04:18.807449  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:04:19.025666  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:04:57.272770  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222348 -n no-preload-222348
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:11:47.911855892 +0000 UTC m=+5455.086830033
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-222348 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-222348 logs -n 25: (1.365347382s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-644411             | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
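The `host.minikube.internal` record is injected by the sed pipeline logged at 04:02:41.645100: minikube reads the coredns ConfigMap, inserts a `hosts` stanza before the `forward . /etc/resolv.conf` line and a `log` directive before `errors`, then replaces the ConfigMap. Per that sed expression, the fragment added to the Corefile is:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }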
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
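The healthz wait above is an HTTPS GET against the apiserver until it answers 200 with body "ok". A self-contained Go sketch of such a probe follows; note that certificate verification is skipped here only to keep the example standalone, which is an assumption of the sketch — a proper check would trust the cluster CA instead:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skipping verification keeps the sketch self-contained; a real
    			// probe should verify against the cluster's CA certificate.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.39.106:8443/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable yet:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }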
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
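The pod_ready.go:102 / pod_ready.go:92 entries that recur through this trace are readiness polls: each pod is fetched and its PodReady condition inspected until it reports True or the per-pod timeout runs out. A rough client-go sketch of that check — an illustration using standard client-go packages, not minikube's actual code; the pod name, namespace, kubeconfig path, and intervals are examples taken from or modeled on the log:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the named pod's PodReady condition is True,
    // which is the condition the pod_ready.go log lines are waiting on.
    func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // example path
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		ready, err := isPodReady(ctx, cs, "kube-system", "metrics-server-74d5856cc6-z4fsg")
    		if err == nil && ready {
    			fmt.Println("pod is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			fmt.Println("timed out waiting for pod to be Ready")
    			return
    		case <-time.After(2 * time.Second):
    		}
    	}
    }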
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
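(Editor's note, illustrative only: the lines above show libmachine building an external `ssh ... "exit 0"` probe to wait for SSH. The minimal Go sketch below reproduces that kind of probe under the same options; the key path is a placeholder and this is not minikube's actual implementation.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/path/to/machines/embed-certs-672176/id_rsa", // placeholder key path
		"-p", "22",
		"docker@192.168.72.208",
		"exit 0", // the probe: a zero exit status means sshd accepts the key
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}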
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
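(Editor's note, illustrative only: the guest-clock check above compares the guest's `date +%s.%N` output against the host clock and accepts the machine if the delta is small. The sketch below shows that comparison in isolation; the tolerance value is a placeholder, not the value minikube uses.)

package main

import (
	"fmt"
	"math"
	"time"
)

// clockWithinTolerance reports whether the guest's Unix time (seconds, with
// fractional part) is within the given tolerance of the host clock.
func clockWithinTolerance(guestUnix float64, tolerance time.Duration) bool {
	host := float64(time.Now().UnixNano()) / float64(time.Second)
	delta := time.Duration(math.Abs(host-guestUnix) * float64(time.Second))
	return delta <= tolerance
}

func main() {
	guest := 1701144442.982632661 // value reported by the guest in the log above
	fmt.Println(clockWithinTolerance(guest, 2*time.Second))
}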
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
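(Editor's note, illustrative only: the netfilter handling above probes the bridge-nf-call-iptables sysctl, loads br_netfilter when the key is missing, and enables IPv4 forwarding. The Go sketch below mirrors those three steps with simplified error handling; it is not minikube's code.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// If the sysctl key is absent, the bridge netfilter module is not loaded yet.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		log.Printf("sysctl probe failed (%v), loading br_netfilter", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v", err)
		}
	}
	// Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward (requires root).
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		log.Fatalf("enable ip_forward: %v", err)
	}
}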
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
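(Editor's note, illustrative only: the preload flow above checks for /preloaded.tar.lz4 on the guest, copies the cached tarball over when it is missing, unpacks it into /var with lz4, and removes it. The sketch below restates that flow; the copy step is stubbed because the real code streams the file over SSH.)

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		log.Printf("%s missing (%v); the cached preload tarball would be copied here", tarball, err)
		return
	}
	// Equivalent of: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	_ = os.Remove(tarball) // the tarball is deleted once extracted
}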
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
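(Editor's note, illustrative only: the symlink names above such as 3ec20f2e.0 and b5213941.0 come from `openssl x509 -hash -noout`, which prints a certificate's subject hash; a <hash>.0 link in /etc/ssl/certs then points at the PEM file. The sketch below shows that pattern; the log's actual commands run through sudo and `ln -fs`.)

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert creates /etc/ssl/certs/<subject-hash>.0 pointing at the given PEM.
func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pem, err)
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // drop any stale link first, as `ln -fs` would
	return os.Symlink(pem, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}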
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
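	(For context: the repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` checks above poll for the apiserver process roughly every 500ms until a context deadline expires, at which point minikube decides the node "needs reconfigure". A minimal Go sketch of that retry pattern, using plain `os/exec` locally instead of minikube's ssh_runner; the command, cadence, and timeout are assumptions for illustration, not minikube's actual api_server.go.)

	```go
	// Illustrative only: poll for a process by pattern until it appears or the
	// context deadline expires, mirroring the retry cadence in the log above.
	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForProcess runs `pgrep -xnf pattern` every 500ms until it succeeds
	// (a matching process exists) or ctx is done (deadline exceeded).
	func waitForProcess(ctx context.Context, pattern string) error {
		for {
			if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
				return nil // pgrep exited 0: the process was found
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("apiserver error: %w", ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()
		if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
			fmt.Println("needs reconfigure:", err)
		}
	}
	```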
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
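	(For context: the healthz wait above GETs `/healthz` about every 500ms, tolerating the transient 403 "system:anonymous" and 500 "bootstrap-roles failed" responses while the apiserver's post-start hooks finish, and stops on the first 200 "ok". A minimal Go sketch of that polling loop; the URL, timeout, and TLS handling are assumptions for illustration, not minikube's actual api_server.go.)

	```go
	// Illustrative only: poll an apiserver /healthz endpoint until it returns
	// 200 or the overall timeout expires.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver here serves a self-signed certificate, so skip
			// verification for this sketch only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: control plane is reachable
				}
				// 403/500 while bootstrap hooks complete is expected; retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.208:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```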
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
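	(For context: the two commands above create `/etc/cni/net.d` and copy a 457-byte bridge conflist into it. The log does not show the file's contents; the Go sketch below writes a hypothetical bridge-plugin conflist of the same general shape, and every field value in it is an illustrative assumption rather than the file minikube actually generated.)

	```go
	// Illustrative only: write a minimal CNI bridge conflist. The plugin fields
	// and subnet below are assumptions, not minikube's actual 1-k8s.conflist.
	package main

	import (
		"log"
		"os"
		"path/filepath"
	)

	func main() {
		conflist := `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`
		dir := "/etc/cni/net.d"
		if err := os.MkdirAll(dir, 0o755); err != nil { // mirrors `sudo mkdir -p /etc/cni/net.d`
			log.Fatal(err)
		}
		if err := os.WriteFile(filepath.Join(dir, "1-k8s.conflist"), []byte(conflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}
	```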
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
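	(For context: the long pod_ready.go tail above re-checks the metrics-server pod's Ready condition every couple of seconds until the 4m0s budget runs out, which is why this test ultimately fails. A minimal client-go sketch of that readiness-polling pattern; the kubeconfig path, namespace, pod name, and timeout are assumptions for illustration, not minikube's pod_ready.go.)

	```go
	// Illustrative only: poll a pod until its PodReady condition is True or a
	// timeout expires, the pattern visible in the pod_ready.go log lines above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			pod, err := clientset.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-57f55c9bc5-sbkpc", metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for pod to be Ready")
				return
			case <-time.After(2 * time.Second): // re-check, as the log above does
			}
		}
	}
	```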
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:56:57 UTC, ends at Tue 2023-11-28 04:11:48 UTC. --
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.731614900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dfc352f1-e96e-43d3-9d9a-d904b59bcb96 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.731858725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dfc352f1-e96e-43d3-9d9a-d904b59bcb96 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.801545222Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=22e0cafa-1553-469b-ba65-d3e332dd60d5 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.801642808Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=22e0cafa-1553-469b-ba65-d3e332dd60d5 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.802832382Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1d8da725-ec4b-42fa-a3e8-c9c1242b4bdb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.803213701Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144708803194431,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=1d8da725-ec4b-42fa-a3e8-c9c1242b4bdb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.804350387Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9a9c16b-15a7-4cdb-a180-7f17795deba0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.805504071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9a9c16b-15a7-4cdb-a180-7f17795deba0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.805823694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9a9c16b-15a7-4cdb-a180-7f17795deba0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.849003037Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=70dd0794-917a-45ab-8eac-7d87ca890c7d name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.849064537Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=70dd0794-917a-45ab-8eac-7d87ca890c7d name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.850487282Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=e0d108a7-00b7-42d0-bb08-2266cf0ba119 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.850950918Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144708850931964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=e0d108a7-00b7-42d0-bb08-2266cf0ba119 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.851960162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c94da658-34e5-490e-88b6-4d27f69be507 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.852009798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c94da658-34e5-490e-88b6-4d27f69be507 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.852188199Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c94da658-34e5-490e-88b6-4d27f69be507 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.884031644Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=8c53e3ea-5462-49e7-8651-f120486d4125 name=/runtime.v1.RuntimeService/Status
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.884135999Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=8c53e3ea-5462-49e7-8651-f120486d4125 name=/runtime.v1.RuntimeService/Status
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.894167684Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3d5e4b9b-6848-4479-92cc-ccf860a76836 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.894243379Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3d5e4b9b-6848-4479-92cc-ccf860a76836 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.895651116Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=42a8ce3d-1d75-4d25-b3bb-1ba38659f4ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.896418892Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144708896396889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=42a8ce3d-1d75-4d25-b3bb-1ba38659f4ae name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.898019892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b5fd1a4e-ba74-490e-8d1a-66e6044ffe48 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.898068994Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b5fd1a4e-ba74-490e-8d1a-66e6044ffe48 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:11:48 no-preload-222348 crio[717]: time="2023-11-28 04:11:48.898246212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b5fd1a4e-ba74-490e-8d1a-66e6044ffe48 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1957c12842b67       df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55   9 minutes ago       Running             kube-proxy                0                   df72b36aadcf8       kube-proxy-2cf7h
	850958f2fb6eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   7940165f0057b       storage-provisioner
	03135efda9053       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   e56c380450d23       coredns-76f75df574-kqgf5
	510892e048714       4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9   9 minutes ago       Running             kube-scheduler            2                   86e73488f5313       kube-scheduler-no-preload-222348
	6544adf0def62       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   b6b101c20ad2a       etcd-no-preload-222348
	7cf7aa04e4dff       e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4   9 minutes ago       Running             kube-controller-manager   2                   7587ef7ab3199       kube-controller-manager-no-preload-222348
	3db05ce5a1b14       e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7   9 minutes ago       Running             kube-apiserver            2                   a8f4db4a98220       kube-apiserver-no-preload-222348
	
	* 
	* ==> coredns [03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56943 - 39680 "HINFO IN 3995391236530009408.8397340726467799273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036086635s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-222348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-222348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=no-preload-222348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-222348
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:11:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:07:55 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:07:55 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:07:55 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:07:55 +0000   Tue, 28 Nov 2023 04:02:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    no-preload-222348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a49fea7b2a8c47519b7ba0d73fbaad30
	  System UUID:                a49fea7b-2a8c-4751-9b7b-a0d73fbaad30
	  Boot ID:                    b22808a2-5e4c-467c-b657-05f3e0a0861b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.0
	  Kube-Proxy Version:         v1.29.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kqgf5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-222348                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-no-preload-222348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-no-preload-222348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-2cf7h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-222348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 metrics-server-57f55c9bc5-kl8k4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m30s (x8 over 9m31s)  kubelet          Node no-preload-222348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m30s (x8 over 9m31s)  kubelet          Node no-preload-222348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m30s (x7 over 9m31s)  kubelet          Node no-preload-222348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s                  kubelet          Node no-preload-222348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s                  kubelet          Node no-preload-222348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s                  kubelet          Node no-preload-222348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m9s                   node-controller  Node no-preload-222348 event: Registered Node no-preload-222348 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 03:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.080691] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.493450] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.466422] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139852] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.439098] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov28 03:57] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.120544] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.140641] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.122158] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.255896] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +30.732579] systemd-fstab-generator[1331]: Ignoring "noauto" for root device
	[ +20.868516] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 04:02] systemd-fstab-generator[3952]: Ignoring "noauto" for root device
	[  +9.790081] systemd-fstab-generator[4279]: Ignoring "noauto" for root device
	[ +13.282813] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.313267] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a] <==
	* {"level":"info","ts":"2023-11-28T04:02:22.672954Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"133f99d1dc1797cc","initial-advertise-peer-urls":["https://192.168.39.106:2380"],"listen-peer-urls":["https://192.168.39.106:2380"],"advertise-client-urls":["https://192.168.39.106:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.106:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:02:22.673059Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:02:22.672902Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2023-11-28T04:02:22.67391Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.106:2380"}
	{"level":"info","ts":"2023-11-28T04:02:23.603865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T04:02:23.603948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T04:02:23.604006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 1"}
	{"level":"info","ts":"2023-11-28T04:02:23.604023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.605849Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.606902Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:no-preload-222348 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:02:23.607156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:02:23.607575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.60768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.607873Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.607927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:02:23.609907Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:02:23.609957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T04:02:23.610037Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:02:23.611538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2023-11-28T04:07:31.512819Z","caller":"traceutil/trace.go:171","msg":"trace[482128868] transaction","detail":"{read_only:false; response_revision:730; number_of_response:1; }","duration":"227.376137ms","start":"2023-11-28T04:07:31.285284Z","end":"2023-11-28T04:07:31.51266Z","steps":["trace[482128868] 'process raft request'  (duration: 226.559304ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:31.754533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.358376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T04:07:31.75483Z","caller":"traceutil/trace.go:171","msg":"trace[1868309745] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:730; }","duration":"133.782256ms","start":"2023-11-28T04:07:31.62102Z","end":"2023-11-28T04:07:31.754802Z","steps":["trace[1868309745] 'range keys from in-memory index tree'  (duration: 133.213828ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  04:11:49 up 15 min,  0 users,  load average: 0.14, 0.35, 0.30
	Linux no-preload-222348 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7] <==
	* I1128 04:05:43.847182       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:07:25.079330       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:07:25.079700       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1128 04:07:26.080008       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:07:26.080191       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:07:26.080244       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:07:26.080356       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:07:26.080418       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:07:26.082365       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:08:26.080620       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:08:26.080798       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:08:26.080815       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:08:26.083193       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:08:26.083250       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:08:26.083260       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:10:26.081861       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:10:26.082217       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:10:26.082255       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:10:26.084037       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:10:26.084094       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:10:26.084103       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943] <==
	* I1128 04:06:10.663463       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:06:40.204180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:06:40.678036       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:07:10.210526       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:07:10.687339       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:07:40.220642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:07:40.696125       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:08:10.228123       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:08:10.705065       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:08:38.664555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="351.492µs"
	E1128 04:08:40.235355       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:08:40.713886       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:08:51.659172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="325.233µs"
	E1128 04:09:10.241161       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:09:10.724845       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:09:40.249264       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:09:40.734058       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:10:10.256160       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:10:10.744341       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:10:40.262031       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:10:40.753915       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:11:10.268954       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:11:10.763261       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:11:40.275589       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:11:40.773361       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3] <==
	* I1128 04:02:45.031087       1 server_others.go:72] "Using iptables proxy"
	I1128 04:02:45.054305       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I1128 04:02:45.102085       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1128 04:02:45.102161       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 04:02:45.102204       1 server_others.go:168] "Using iptables Proxier"
	I1128 04:02:45.105947       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:02:45.106202       1 server.go:865] "Version info" version="v1.29.0-rc.0"
	I1128 04:02:45.106251       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:02:45.107304       1 config.go:188] "Starting service config controller"
	I1128 04:02:45.107358       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:02:45.107390       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:02:45.107407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:02:45.109636       1 config.go:315] "Starting node config controller"
	I1128 04:02:45.109678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:02:45.208456       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:02:45.208587       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:02:45.210028       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2] <==
	* W1128 04:02:25.972683       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:02:25.972797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 04:02:26.001091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:02:26.001228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 04:02:26.105121       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:02:26.105216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 04:02:26.154812       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.154979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.203193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.203291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.274042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:02:26.274150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:02:26.301635       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:02:26.301804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:02:26.315044       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:02:26.315160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 04:02:26.336899       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 04:02:26.337034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 04:02:26.375984       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.376077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.405027       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:02:26.405117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:02:26.578548       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:02:26.578658       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1128 04:02:29.201072       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:56:57 UTC, ends at Tue 2023-11-28 04:11:49 UTC. --
	Nov 28 04:09:04 no-preload-222348 kubelet[4286]: E1128 04:09:04.644426    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:09:18 no-preload-222348 kubelet[4286]: E1128 04:09:18.643802    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:09:28 no-preload-222348 kubelet[4286]: E1128 04:09:28.665922    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:09:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:09:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:09:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:09:32 no-preload-222348 kubelet[4286]: E1128 04:09:32.643016    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:09:46 no-preload-222348 kubelet[4286]: E1128 04:09:46.643845    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:10:00 no-preload-222348 kubelet[4286]: E1128 04:10:00.645957    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:10:14 no-preload-222348 kubelet[4286]: E1128 04:10:14.647028    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:10:25 no-preload-222348 kubelet[4286]: E1128 04:10:25.643809    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:10:28 no-preload-222348 kubelet[4286]: E1128 04:10:28.665857    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:10:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:10:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:10:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:10:40 no-preload-222348 kubelet[4286]: E1128 04:10:40.643106    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:10:53 no-preload-222348 kubelet[4286]: E1128 04:10:53.643175    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:11:05 no-preload-222348 kubelet[4286]: E1128 04:11:05.643486    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:11:17 no-preload-222348 kubelet[4286]: E1128 04:11:17.643190    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:11:28 no-preload-222348 kubelet[4286]: E1128 04:11:28.668148    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:11:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:11:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:11:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:11:31 no-preload-222348 kubelet[4286]: E1128 04:11:31.643265    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:11:45 no-preload-222348 kubelet[4286]: E1128 04:11:45.643314    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	
	* 
	* ==> storage-provisioner [850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf] <==
	* I1128 04:02:44.843305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:02:44.899093       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:02:44.899188       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:02:44.914589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:02:44.915687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263!
	I1128 04:02:44.915394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac131277-f0b2-4398-b830-9b6c80a229fd", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263 became leader
	I1128 04:02:45.017072       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-222348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kl8k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4: exit status 1 (72.210018ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kl8k4" not found

** /stderr **
helpers_test.go:279: kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.54s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.62s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:05:10.258172  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:05:41.853037  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:05:42.071451  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:06:17.839497  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:06:23.484289  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 04:06:33.303880  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:06:58.569013  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:07:05.903581  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 04:07:40.886185  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:07:46.532049  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 04:07:55.195532  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:08:21.614490  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:08:28.948484  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 04:08:34.223012  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 04:08:43.673952  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 04:09:18.240085  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:09:18.806977  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:09:19.025372  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:10:10.257543  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:11:17.839229  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:11:23.483928  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666657 -n old-k8s-version-666657
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:14:01.546331601 +0000 UTC m=+5588.721305738
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-666657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-666657 logs -n 25: (1.402444239s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-644411             | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
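Each "Gathering logs for ..." line above shells out to `crictl logs --tail 400 <container-id>` on the node. A minimal local sketch of that log-gathering loop is shown below (it is not the minikube implementation, which runs the commands remotely through ssh_runner); the container IDs are truncated placeholders, and the crictl flags are the ones already visible in the commands above.

```go
// Minimal sketch of the log-gathering loop logged above: run
// `crictl logs --tail 400 <id>` for each container ID and print the output.
// Container IDs are placeholders; minikube executes this remotely over SSH.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	containerIDs := []string{
		"d962ca3c6d6a", // kube-apiserver (truncated example ID)
		"39b2c5787e96", // etcd (truncated example ID)
	}
	for _, id := range containerIDs {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to gather logs for %s: %v\n", id, err)
		}
		fmt.Printf("==> logs for %s <==\n%s\n", id, out)
	}
}
```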
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
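Before the "Done!" line above, the run waits on the apiserver /healthz endpoint at https://192.168.61.13:8444, then on the kube-system pod list, the default service account, and the kubelet service. As a rough illustration of that first health check only (not the api_server.go code itself), a standalone probe might look like the sketch below; the URL is taken from the log, while the skip-verify TLS transport and polling cadence are assumptions made because the test cluster serves a self-signed certificate.

```go
// Rough illustration of the apiserver /healthz wait logged above.
// URL copied from the log; TLS skip-verify and polling interval are assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.61.13:8444/healthz"
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
		if resp.StatusCode == http.StatusOK {
			return // "ok" means the control plane answered healthily
		}
		time.Sleep(2 * time.Second)
	}
}
```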
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
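The log above only records that a 457-byte bridge CNI config is copied to /etc/cni/net.d/1-k8s.conflist; its contents are not shown. The sketch below writes a typical bridge + host-local conflist to that path for illustration; the field values (subnet, plugin options) are assumptions, not necessarily what minikube ships.

```go
// Hedged sketch: write a typical bridge CNI conflist to the path used in the
// log above. The JSON fields are a generic example, not minikube's exact file.
package main

import (
	"fmt"
	"os"
)

func main() {
	conflist := `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, "write conflist:", err)
		os.Exit(1)
	}
	fmt.Println("wrote bridge CNI config")
}
```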
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
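
The restart path gives up here because the metrics-server pod never reported Ready within its 4m0s budget (the repeated pod_ready.go:102 checks above). A pod stuck in that state can be inspected by hand using the pod name from the log; these commands are illustrative and were not part of the test run:

    $ kubectl --context embed-certs-672176 -n kube-system get pod metrics-server-57f55c9bc5-sbkpc -o wide
    $ kubectl --context embed-certs-672176 -n kube-system describe pod metrics-server-57f55c9bc5-sbkpc
    $ kubectl --context embed-certs-672176 -n kube-system logs metrics-server-57f55c9bc5-sbkpc --tail=50
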
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
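
The [wait-control-plane] phase only watches the static pods the kubelet creates from /etc/kubernetes/manifests. If a control plane failed to come up in this window, the manifests and the resulting containers could be checked inside the VM, e.g. (illustrative, using the ssh form this report already uses for other profiles):

    $ out/minikube-linux-amd64 -p embed-certs-672176 ssh "sudo ls /etc/kubernetes/manifests"
    $ out/minikube-linux-amd64 -p embed-certs-672176 ssh "sudo crictl ps --name kube-apiserver"
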
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
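
The bootstrap token embedded in the join commands above is short-lived (kubeadm's default TTL is 24h), so the printed commands are only valid briefly. A fresh join command could be regenerated on the control plane if ever needed (illustrative, not run by the test):

    $ sudo kubeadm token create --print-join-command
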
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
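
The 457-byte conflist copied above is minikube's bridge CNI configuration; its exact contents are not captured in this log, but they could be read back from the VM with:

    $ out/minikube-linux-amd64 -p embed-certs-672176 ssh "cat /etc/cni/net.d/1-k8s.conflist"

A minimal bridge conflist of the same general shape, with an assumed pod subnet, looks like:

    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
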
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
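
The toEnable map above is derived from the profile's addon flags; the same toggles can be flipped after the fact with the addons subcommand, for example (illustrative):

    $ out/minikube-linux-amd64 -p embed-certs-672176 addons enable metrics-server
    $ out/minikube-linux-amd64 -p embed-certs-672176 addons list
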
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
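
kubeadm deploys CoreDNS with two replicas by default; minikube rescales the deployment to one, as logged above. The equivalent manual command would be (illustrative):

    $ kubectl --context embed-certs-672176 -n kube-system scale deployment coredns --replicas=1
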
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
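
The sed pipeline completed above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway of the KVM network. Reconstructed from the sed expression itself, the injected Corefile block looks like:

    hosts {
       192.168.72.1 host.minikube.internal
       fallthrough
    }
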
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
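
With the metrics-server addon applied, its APIService registration and the metrics pipeline can be spot-checked once the pod is Ready (illustrative, not part of this test step):

    $ kubectl --context embed-certs-672176 get apiservice v1beta1.metrics.k8s.io
    $ kubectl --context embed-certs-672176 top nodes
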
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:57:45 UTC, ends at Tue 2023-11-28 04:14:02 UTC. --
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.419005656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4b9f5953-a32a-49d9-8884-c41bfe870d51 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.420875407Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3e1a14e7-a7ba-41cb-9bea-810a16984efb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.421323941Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144842421306642,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3e1a14e7-a7ba-41cb-9bea-810a16984efb name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.422241271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ee73521b-cb4e-433b-a677-ac4f3c9ab23d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.422288502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ee73521b-cb4e-433b-a677-ac4f3c9ab23d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.422464691Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ee73521b-cb4e-433b-a677-ac4f3c9ab23d name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.425014299Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=90ed6bd5-80ec-4738-84fa-6125e8c85f04 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.425236292Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5ebb14021207389f31c2896536939bcc27d1e04ebcdbadf6418c12a59adc4916,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-wlfq5,Uid:64cff3b8-b297-425e-91bc-26e7ca091bfc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144229398214240,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-wlfq5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 64cff3b8-b297-425e-91bc-26e7ca091bfc,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T04:03:49.055280068Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:ed59bc28-66f5-44f8-9ff5-d5be69e004
9a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144228812405378,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\
"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-11-28T04:03:48.455511452Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-529cg,Uid:1c07d1ac-6461-451e-a1bf-4a5493d7d453,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144227448809398,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T04:03:46.209204858Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&PodSandboxMetadata{Name:kube-proxy-fpjnf,Uid:62ef95f3-b9bc-4936-a2e
7-398191b6bed5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144226424961177,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-11-28T04:03:46.082049783Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-666657,Uid:7376ddb4f190a0ded9394063437bcb4e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144199860096407,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 7376ddb4f190a0ded9394063437bcb4e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7376ddb4f190a0ded9394063437bcb4e,kubernetes.io/config.seen: 2023-11-28T04:03:19.339308991Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-666657,Uid:e35cc95d33d1e82251c247e4c3039876,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144199852111081,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e35cc95d33d1e82251c247e4c3039876,kubernetes.io/config.seen: 2023-11-28T04:03:19.339305726Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4406cf30e5698951c86d82a2d13e
97a26ed67affd0738799478173ca906394ee,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-666657,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701144199790296354,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-11-28T04:03:19.339301152Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-666657,Uid:b2202267222584f9d33fefa0997a4eab,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1701143896173447580,Labels:map[string]string{component: kube-apiserver,io
.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b2202267222584f9d33fefa0997a4eab,kubernetes.io/config.seen: 2023-11-28T03:58:15.694760368Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=90ed6bd5-80ec-4738-84fa-6125e8c85f04 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.426178678Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c2db4feb-89ac-446a-b197-783505eaf57c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.426230416Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c2db4feb-89ac-446a-b197-783505eaf57c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.426408061Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c2db4feb-89ac-446a-b197-783505eaf57c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.468792004Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0aa21888-6ab7-4d0e-8f36-ffff87c93526 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.468857034Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0aa21888-6ab7-4d0e-8f36-ffff87c93526 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.469983769Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=2d40dcc5-5324-404b-b485-18a0637e87b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.470418087Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144842470400886,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=2d40dcc5-5324-404b-b485-18a0637e87b7 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.471128964Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=92cbe8a9-455e-4fb5-b680-d825ca62cf46 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.471184261Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=92cbe8a9-455e-4fb5-b680-d825ca62cf46 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.471391287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=92cbe8a9-455e-4fb5-b680-d825ca62cf46 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.510597744Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7d8a0457-6c25-4160-ac88-93698a9783bd name=/runtime.v1.RuntimeService/Version
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.510760804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7d8a0457-6c25-4160-ac88-93698a9783bd name=/runtime.v1.RuntimeService/Version
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.512310353Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=80893482-2c4a-456c-9703-df0b7f17a684 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.512818082Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701144842512802257,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=80893482-2c4a-456c-9703-df0b7f17a684 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.514379019Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f7122a2e-0625-4ee9-bb1a-02aebd6007d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.514429431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f7122a2e-0625-4ee9-bb1a-02aebd6007d7 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:14:02 old-k8s-version-666657 crio[716]: time="2023-11-28 04:14:02.514771543Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f7122a2e-0625-4ee9-bb1a-02aebd6007d7 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecbe1433454e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   10 minutes ago      Running             storage-provisioner       0                   fa01086d74baa       storage-provisioner
	a1a36dd35c0d6       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   10 minutes ago      Running             kube-proxy                0                   b1d5aa0a16339       kube-proxy-fpjnf
	bf61f1f828a44       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   10 minutes ago      Running             coredns                   0                   a7bec5579a274       coredns-5644d7b6d9-529cg
	108096398e441       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   10 minutes ago      Running             etcd                      0                   fc9ca2bef594f       etcd-old-k8s-version-666657
	3fba9d2d49ee6       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   10 minutes ago      Running             kube-scheduler            0                   4406cf30e5698       kube-scheduler-old-k8s-version-666657
	731933b8d59f9       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   10 minutes ago      Running             kube-controller-manager   0                   2b639245676bd       kube-controller-manager-old-k8s-version-666657
	eb7bc23ae3bb9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   10 minutes ago      Running             kube-apiserver            1                   6ac5768cbf19e       kube-apiserver-old-k8s-version-666657
	d92a27c0ce264       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   15 minutes ago      Exited              kube-apiserver            0                   6ac5768cbf19e       kube-apiserver-old-k8s-version-666657
	
	* 
	* ==> coredns [bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20] <==
	* .:53
	2023-11-28T04:03:48.303Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T04:03:48.303Z [INFO] CoreDNS-1.6.2
	2023-11-28T04:03:48.303Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-666657
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-666657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=old-k8s-version-666657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:03:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:13:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:13:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:13:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:13:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    old-k8s-version-666657
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0539a6bc6c654b8fa43b48e960f31234
	 System UUID:                0539a6bc-6c65-4b8f-a43b-48e960f31234
	 Boot ID:                    c7565d7d-520e-4ee6-b523-8de18c606738
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-529cg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     10m
	  kube-system                etcd-old-k8s-version-666657                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m25s
	  kube-system                kube-apiserver-old-k8s-version-666657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                kube-controller-manager-old-k8s-version-666657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                kube-proxy-fpjnf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                kube-scheduler-old-k8s-version-666657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m38s
	  kube-system                metrics-server-74d5856cc6-wlfq5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         10m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                kube-proxy, old-k8s-version-666657  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov28 03:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069990] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.365222] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154274] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.624689] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.901266] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.119101] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.224542] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.152090] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.263743] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Nov28 03:58] systemd-fstab-generator[1033]: Ignoring "noauto" for root device
	[  +0.423782] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.918472] kauditd_printk_skb: 13 callbacks suppressed
	[Nov28 03:59] kauditd_printk_skb: 4 callbacks suppressed
	[Nov28 04:03] systemd-fstab-generator[3102]: Ignoring "noauto" for root device
	[  +1.171577] kauditd_printk_skb: 6 callbacks suppressed
	[ +34.319451] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3] <==
	* 2023-11-28 04:03:21.759200 I | raft: newRaft 856b77cd5251110c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-28 04:03:21.759204 I | raft: 856b77cd5251110c became follower at term 1
	2023-11-28 04:03:21.766569 W | auth: simple token is not cryptographically signed
	2023-11-28 04:03:21.771452 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-28 04:03:21.772839 I | etcdserver: 856b77cd5251110c as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-28 04:03:21.773299 I | etcdserver/membership: added member 856b77cd5251110c [https://192.168.50.7:2380] to cluster b162f841703ff885
	2023-11-28 04:03:21.773633 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 04:03:21.773818 I | embed: listening for metrics on http://192.168.50.7:2381
	2023-11-28 04:03:21.773986 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-28 04:03:22.259888 I | raft: 856b77cd5251110c is starting a new election at term 1
	2023-11-28 04:03:22.260086 I | raft: 856b77cd5251110c became candidate at term 2
	2023-11-28 04:03:22.260210 I | raft: 856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2
	2023-11-28 04:03:22.260240 I | raft: 856b77cd5251110c became leader at term 2
	2023-11-28 04:03:22.260334 I | raft: raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2
	2023-11-28 04:03:22.260928 I | etcdserver: published {Name:old-k8s-version-666657 ClientURLs:[https://192.168.50.7:2379]} to cluster b162f841703ff885
	2023-11-28 04:03:22.261118 I | embed: ready to serve client requests
	2023-11-28 04:03:22.261140 I | embed: ready to serve client requests
	2023-11-28 04:03:22.262342 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 04:03:22.262400 I | embed: serving client requests on 192.168.50.7:2379
	2023-11-28 04:03:22.262482 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-28 04:03:22.263533 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-28 04:03:22.263649 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-28 04:03:47.881360 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (296.317166ms) to execute
	2023-11-28 04:13:23.038054 I | mvcc: store.index: compact 661
	2023-11-28 04:13:23.040618 I | mvcc: finished scheduled compaction at 661 (took 2.008371ms)
	
	* 
	* ==> kernel <==
	*  04:14:02 up 16 min,  0 users,  load average: 0.19, 0.27, 0.23
	Linux old-k8s-version-666657 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4] <==
	* W1128 04:03:16.402852       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.402894       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.402969       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403006       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403085       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403792       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403795       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403819       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403838       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403890       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403955       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404016       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404073       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404127       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404155       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404240       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404267       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404292       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404325       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404353       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404414       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403858       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403874       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:17.688285       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:17.696531       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0] <==
	* I1128 04:06:49.753494       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:06:49.753610       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:06:49.753747       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:06:49.753756       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:08:27.249311       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:08:27.249625       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:08:27.249871       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:08:27.249919       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:09:27.250495       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:09:27.250848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:09:27.250944       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:09:27.251017       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:11:27.251383       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:11:27.251532       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:11:27.251610       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:11:27.251617       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:13:27.253530       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:13:27.254049       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:13:27.254223       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:13:27.254266       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a] <==
	* E1128 04:07:48.561754       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:08:02.603179       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:08:18.813776       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:08:34.605923       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:08:49.066355       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:09:06.609107       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:09:19.318480       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:09:38.611932       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:09:49.570493       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:10:10.614406       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:10:19.823511       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:10:42.616435       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:10:50.075993       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:11:14.618764       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:11:20.328246       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:11:46.620900       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:11:50.580520       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:12:18.623148       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:12:20.832572       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:12:50.625003       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:12:51.084628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1128 04:13:21.336566       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:13:22.627077       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:13:51.588876       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:13:54.629002       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159] <==
	* W1128 04:03:49.179393       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1128 04:03:49.189741       1 node.go:135] Successfully retrieved node IP: 192.168.50.7
	I1128 04:03:49.189832       1 server_others.go:149] Using iptables Proxier.
	I1128 04:03:49.191007       1 server.go:529] Version: v1.16.0
	I1128 04:03:49.194972       1 config.go:131] Starting endpoints config controller
	I1128 04:03:49.195102       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1128 04:03:49.196030       1 config.go:313] Starting service config controller
	I1128 04:03:49.196098       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1128 04:03:49.299135       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1128 04:03:49.299403       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2] <==
	* W1128 04:03:26.247819       1 authentication.go:79] Authentication is disabled
	I1128 04:03:26.247952       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1128 04:03:26.249523       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1128 04:03:26.307045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:03:26.307185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:03:26.307288       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:03:26.307356       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:03:26.311269       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:03:26.311363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:03:26.311409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:03:26.311455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:03:26.315409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:26.316940       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:03:26.316960       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:27.311079       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:03:27.317262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:03:27.317398       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:03:27.318284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:03:27.319837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:03:27.320362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:03:27.321609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:03:27.325998       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:03:27.327193       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:27.329373       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:03:27.330609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:57:45 UTC, ends at Tue 2023-11-28 04:14:03 UTC. --
	Nov 28 04:09:37 old-k8s-version-666657 kubelet[3121]: E1128 04:09:37.395291    3121 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:09:37 old-k8s-version-666657 kubelet[3121]: E1128 04:09:37.395391    3121 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:09:37 old-k8s-version-666657 kubelet[3121]: E1128 04:09:37.395498    3121 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:09:37 old-k8s-version-666657 kubelet[3121]: E1128 04:09:37.395576    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 28 04:09:48 old-k8s-version-666657 kubelet[3121]: E1128 04:09:48.352340    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:10:01 old-k8s-version-666657 kubelet[3121]: E1128 04:10:01.354010    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:10:13 old-k8s-version-666657 kubelet[3121]: E1128 04:10:13.352527    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:10:24 old-k8s-version-666657 kubelet[3121]: E1128 04:10:24.352203    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:10:37 old-k8s-version-666657 kubelet[3121]: E1128 04:10:37.352271    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:10:49 old-k8s-version-666657 kubelet[3121]: E1128 04:10:49.352387    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:11:03 old-k8s-version-666657 kubelet[3121]: E1128 04:11:03.352878    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:11:15 old-k8s-version-666657 kubelet[3121]: E1128 04:11:15.352861    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:11:30 old-k8s-version-666657 kubelet[3121]: E1128 04:11:30.352648    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:11:45 old-k8s-version-666657 kubelet[3121]: E1128 04:11:45.352322    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:11:58 old-k8s-version-666657 kubelet[3121]: E1128 04:11:58.352576    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:12:09 old-k8s-version-666657 kubelet[3121]: E1128 04:12:09.353892    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:12:20 old-k8s-version-666657 kubelet[3121]: E1128 04:12:20.352624    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:12:33 old-k8s-version-666657 kubelet[3121]: E1128 04:12:33.352519    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:12:47 old-k8s-version-666657 kubelet[3121]: E1128 04:12:47.353946    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:01 old-k8s-version-666657 kubelet[3121]: E1128 04:13:01.352786    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:12 old-k8s-version-666657 kubelet[3121]: E1128 04:13:12.352355    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:19 old-k8s-version-666657 kubelet[3121]: E1128 04:13:19.436620    3121 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 28 04:13:27 old-k8s-version-666657 kubelet[3121]: E1128 04:13:27.352782    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:40 old-k8s-version-666657 kubelet[3121]: E1128 04:13:40.354864    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:52 old-k8s-version-666657 kubelet[3121]: E1128 04:13:52.353067    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4] <==
	* I1128 04:03:49.777894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:03:49.788577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:03:49.788788       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:03:49.797356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:03:49.798045       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56d67361-7fdc-4ab6-9363-0dc1d8dccb58", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e became leader
	I1128 04:03:49.798111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e!
	I1128 04:03:49.898206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666657 -n old-k8s-version-666657
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-666657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-wlfq5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5: exit status 1 (67.9588ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wlfq5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.62s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (378.57s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:17:44.389412342 +0000 UTC m=+5811.564386492
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-725962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.531µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-725962 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-725962 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-725962 logs -n 25: (1.390598222s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	| delete  | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
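
Editor's note: the audit table above records the exact minikube invocations replayed for these profiles. Purely as an illustration (not part of the captured log), the Go sketch below replays the embed-certs-672176 start row via os/exec. The binary path out/minikube-linux-amd64 matches the MINIKUBE_BIN value that appears later in this log; everything else about the helper is an assumption for illustration only.

// rerun_start.go - hedged sketch only; mirrors the "start -p embed-certs-672176" row
// from the audit table above. Assumes the minikube binary sits at out/minikube-linux-amd64
// relative to the working directory, as MINIKUBE_BIN suggests in the log below.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	args := []string{
		"start", "-p", "embed-certs-672176",
		"--memory=2200", "--alsologtostderr", "--wait=true",
		"--embed-certs", "--driver=kvm2", "--container-runtime=crio",
		"--kubernetes-version=v1.28.4",
	}
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	cmd.Stdout = os.Stdout // stream minikube's own output through unchanged
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube start failed: %v", err)
	}
}
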
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
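
Editor's note: the 385277 run above shows the pattern used to gather component logs: resolve container IDs with "crictl ps -a --quiet --name=<component>", then tail each container with "crictl logs --tail 400 <id>". The sketch below is a standalone, hedged reimplementation of that loop; it is not minikube's logs.go, and it assumes crictl is installed and runnable via sudo on the target host.

// crilogs.go - illustrative sketch of the crictl log-gathering pattern shown above.
// Not minikube's code; assumes crictl is on PATH and sudo does not prompt interactively.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches the component.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", component, err)
			continue
		}
		for _, id := range ids {
			// Same tail length as the log lines above.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("tailing %s (%s): %v\n", component, id, err)
			}
			fmt.Printf("==> %s [%s] <==\n%s\n", component, id, logs)
		}
	}
}
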
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
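
Editor's note: the 388252 run above provisions the embed-certs-672176 VM by running the hostname command over SSH, and fails with "dial tcp 192.168.72.208:22: connect: no route to host". Below is a hedged standalone sketch of that step using golang.org/x/crypto/ssh; the key path and the "docker" user are assumptions, while the address and command string are the ones shown in the log.

// provision_hostname.go - hedged sketch of running the hostname command over SSH.
// Assumptions: private key location and SSH user; not the libmachine implementation.
package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/path/to/machine/id_rsa") // hypothetical key path
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker", // assumed user for the minikube VM
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "192.168.72.208:22", cfg)
	if err != nil {
		log.Fatal(err) // "no route to host" would surface here, as in the log above
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname`)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("%s", out)
}
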
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
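
Editor's note: before declaring default-k8s-diff-port-725962 ready, the 385277 run above probes the apiserver healthz endpoint and then waits for the kube-system pods and the default service account. The sketch below is a hedged client-go equivalent of the pod check only; the kubeconfig path mirrors the one referenced in the log lines (and would need to point at a reachable kubeconfig), the Running-phase test is a simplification of minikube's readiness logic, and none of this is the actual system_pods.go code.

// podswait.go - hedged sketch of "waiting for kube-system pods" using client-go.
// Assumptions: kubeconfig path and the simplified Running-phase criterion.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		// Simplified check: a pod stuck Pending (like metrics-server above) reports running=false.
		running := p.Status.Phase == corev1.PodRunning
		fmt.Printf("%-60s running=%v\n", p.Name, running)
	}
}
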
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
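	For reference, the join command printed above is what an additional worker node would run against this cluster, and the trailing warning is addressed by enabling the kubelet unit. A minimal sketch of acting on that output on a fresh machine (assuming kubeadm and a container runtime are already installed there and the host can resolve control-plane.minikube.internal; the bootstrap token shown expires after 24 hours by default):
	
	  sudo systemctl enable kubelet.service
	  sudo kubeadm join control-plane.minikube.internal:8443 \
	      --token fhdku8.6c57fpjso9w7rrxv \
	      --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745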
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
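	The 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration that CRI-O picks up for pod networking. If you want to see exactly what it contains, it can be read back out of the VM (a sketch; the profile name is the one used in this run):
	
	  minikube -p embed-certs-672176 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist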
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
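	The pipeline above edits the Corefile held in the coredns ConfigMap: it inserts a hosts block so that host.minikube.internal resolves to the host-side address 192.168.72.1, and adds the log plugin for query logging. Once the cluster is up, the result can be checked by hand (a sketch using the context this run configures):
	
	  kubectl --context embed-certs-672176 -n kube-system get configmap coredns -o yaml
	  kubectl --context embed-certs-672176 run dnstest --rm -it --image=busybox --restart=Never -- nslookup host.minikube.internal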
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
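	At this point the embed-certs-672176 profile has a working single-node cluster with the default-storageclass, storage-provisioner and metrics-server addons enabled; the output that follows is the captured CRI-O journal from the default-k8s-diff-port-725962 node. The cluster state reached above can be inspected by hand (a sketch; the context and profile names come from this run):
	
	  kubectl --context embed-certs-672176 get nodes
	  kubectl --context embed-certs-672176 -n kube-system get pods
	  minikube -p embed-certs-672176 addons list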
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:57:20 UTC, ends at Tue 2023-11-28 04:17:45 UTC. --
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.200200114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145065200189855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f2c7befd-4b45-428f-87ad-cd51bb0f05de name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.200785472Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2eefdbf1-9545-47cc-8803-6aa80611f954 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.200831123Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2eefdbf1-9545-47cc-8803-6aa80611f954 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.201036654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2eefdbf1-9545-47cc-8803-6aa80611f954 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.243636297Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=19b64ddf-145c-40fb-bfeb-4f7dd6ff58dd name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.243694423Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=19b64ddf-145c-40fb-bfeb-4f7dd6ff58dd name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.245083857Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ebe77f59-c87b-4d60-8b58-bdc00bc7cabd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.245553923Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145065245540068,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=ebe77f59-c87b-4d60-8b58-bdc00bc7cabd name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.246196367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8c991608-438b-4857-8e3b-8d1ec75e5af9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.246241710Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8c991608-438b-4857-8e3b-8d1ec75e5af9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.246501960Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8c991608-438b-4857-8e3b-8d1ec75e5af9 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.288242131Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ca41c9dd-c7f2-4e25-8600-9500ba656a77 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.288363672Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ca41c9dd-c7f2-4e25-8600-9500ba656a77 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.290236745Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=816bd9a7-11e4-4a1a-9521-c905d16d83ba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.290699901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145065290666808,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=816bd9a7-11e4-4a1a-9521-c905d16d83ba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.291386190Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1c6518bb-94d8-47e6-b4e2-05fb1269e642 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.291433607Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1c6518bb-94d8-47e6-b4e2-05fb1269e642 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.291618765Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1c6518bb-94d8-47e6-b4e2-05fb1269e642 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.327799637Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f7fdd597-cbcc-42b3-854a-9117bf55a6dc name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.327858687Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f7fdd597-cbcc-42b3-854a-9117bf55a6dc name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.329570615Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0f91e658-7e01-4c6a-ae69-78962cbd6e2e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.329935223Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145065329924007,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0f91e658-7e01-4c6a-ae69-78962cbd6e2e name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.330754235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=51e9fa3b-d8f7-47b1-b289-3c5bad74a3f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.330797711Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=51e9fa3b-d8f7-47b1-b289-3c5bad74a3f0 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:45 default-k8s-diff-port-725962 crio[697]: time="2023-11-28 04:17:45.330976997Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701143910928585218,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount
: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:603734d47a89fc8412409020cf1963bed92f2194265626114efe26478defef0e,PodSandboxId:43660dd16af48203ea06d886b46f4f7b8eb9fb1b1d9161ea7c12b2abf4307511,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1701143888388164565,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 74311fc7-06a5-4161-8803-f0ff8bf14071,},Annotations:map[string]string{io.kubernetes.container.hash: aadc8863,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371,PodSandboxId:f242bc7227c3cee21092d232805479d93e0693ea7f9cb7c76b426f8ffb11c221,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701143887326667328,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-5pf9p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f,},Annotations:map[string]string{io.kubernetes.container.hash: a223f807,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e,PodSandboxId:f00e09ac58f21959f8a1b56b68264b6d40341c94334898150861ad3211d7bf4c,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1701143880061507130,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: k
ube-system,io.kubernetes.pod.uid: 074eb0a7-45ef-4b63-9068-e061637207f7,},Annotations:map[string]string{io.kubernetes.container.hash: f57bad1c,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7,PodSandboxId:0c1d33643e6bb92d0e3e511b57c1a43a5740fbd605f33c86180ba3b796dcddd2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701143880007941250,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-sp9nc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b
54c0c14-5531-417f-8ce9-547c4bc9c9cf,},Annotations:map[string]string{io.kubernetes.container.hash: 95100269,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58,PodSandboxId:2a57f714a961f291f47ce194ad330aa0badc719d7430fd8d69da7d1cbdb75c12,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701143873288217575,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6e3299c0250acac00f1296eb7f1ff28d,},An
notations:map[string]string{io.kubernetes.container.hash: 6850a9ac,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe,PodSandboxId:a2204d42ef00c55ed3c47ec0b7f04e5b2b57a4f5ff89847f5f09673d25d1eb5f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701143873201657779,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6e06547bea8addecb08d9ab4c2c3384,},An
notations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be,PodSandboxId:a183800045b25df89f76001936af7188a3a2b4ae5cfbdf5be1846c94ae6052b2,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701143872885636361,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 89490cdb2aefb35198720f14b435f087,},An
notations:map[string]string{io.kubernetes.container.hash: ff69feba,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639,PodSandboxId:eac1d0b2f521531b3826108aaa857c4dc70ce03d5768b4a9e900a43168947cb2,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701143872508468883,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-725962,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
7fadb1204004b279b9d2aaedce5fe68,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=51e9fa3b-d8f7-47b1-b289-3c5bad74a3f0 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1806bf0461d3c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       3                   f00e09ac58f21       storage-provisioner
	603734d47a89f       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   43660dd16af48       busybox
	4f1b83cb6065a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      19 minutes ago      Running             coredns                   1                   f242bc7227c3c       coredns-5dd5756b68-5pf9p
	ef25aa6706867       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Exited              storage-provisioner       2                   f00e09ac58f21       storage-provisioner
	3c249ebac5ace       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      19 minutes ago      Running             kube-proxy                1                   0c1d33643e6bb       kube-proxy-sp9nc
	39b2c5787e96c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      19 minutes ago      Running             etcd                      1                   2a57f714a961f       etcd-default-k8s-diff-port-725962
	09e3428759987       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      19 minutes ago      Running             kube-scheduler            1                   a2204d42ef00c       kube-scheduler-default-k8s-diff-port-725962
	d962ca3c6d6a3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      19 minutes ago      Running             kube-apiserver            1                   a183800045b25       kube-apiserver-default-k8s-diff-port-725962
	59767f5d5ca26       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      19 minutes ago      Running             kube-controller-manager   1                   eac1d0b2f5215       kube-controller-manager-default-k8s-diff-port-725962
	
	* 
	* ==> coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47928 - 62110 "HINFO IN 358128015453795916.2116480082888628902. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027807955s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-725962
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-725962
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=default-k8s-diff-port-725962
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T03_48_45_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 03:48:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-725962
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:17:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:13:48 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:13:48 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:13:48 +0000   Tue, 28 Nov 2023 03:48:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:13:48 +0000   Tue, 28 Nov 2023 03:58:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.13
	  Hostname:    default-k8s-diff-port-725962
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 844aae334ccf47b7b0357768a02d626f
	  System UUID:                844aae33-4ccf-47b7-b035-7768a02d626f
	  Boot ID:                    7fe44eff-bca9-43e5-852a-449b02c0b7ca
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-5dd5756b68-5pf9p                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-725962                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-725962             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-725962    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-sp9nc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-725962             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-9bqg8                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-725962 status is now: NodeReady
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-725962 event: Registered Node default-k8s-diff-port-725962 in Controller
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node default-k8s-diff-port-725962 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-725962 event: Registered Node default-k8s-diff-port-725962 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 03:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.077956] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.869783] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.824258] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154369] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.476450] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.862911] systemd-fstab-generator[621]: Ignoring "noauto" for root device
	[  +0.113655] systemd-fstab-generator[632]: Ignoring "noauto" for root device
	[  +0.155484] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.138605] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.228899] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[ +17.681888] systemd-fstab-generator[898]: Ignoring "noauto" for root device
	[Nov28 03:58] kauditd_printk_skb: 19 callbacks suppressed
	
	* 
	* ==> etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] <==
	* {"level":"warn","ts":"2023-11-28T03:58:29.461034Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T03:58:29.117713Z","time spent":"343.271106ms","remote":"127.0.0.1:58292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":600,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" mod_revision:573 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-725962\" > >"}
	{"level":"info","ts":"2023-11-28T03:58:29.461115Z","caller":"traceutil/trace.go:171","msg":"trace[998512203] transaction","detail":"{read_only:false; response_revision:582; number_of_response:1; }","duration":"162.032768ms","start":"2023-11-28T03:58:29.299074Z","end":"2023-11-28T03:58:29.461106Z","steps":["trace[998512203] 'process raft request'  (duration: 161.581345ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T03:58:29.751646Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.956617ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940094687 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:16a18c14138d56de>","response":"size:41"}
	{"level":"info","ts":"2023-11-28T03:58:29.751848Z","caller":"traceutil/trace.go:171","msg":"trace[108578154] linearizableReadLoop","detail":"{readStateIndex:628; appliedIndex:627; }","duration":"206.895236ms","start":"2023-11-28T03:58:29.544938Z","end":"2023-11-28T03:58:29.751833Z","steps":["trace[108578154] 'read index received'  (duration: 43.685145ms)","trace[108578154] 'applied index is now lower than readState.Index'  (duration: 163.208302ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T03:58:29.751967Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"207.035416ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T03:58:29.752018Z","caller":"traceutil/trace.go:171","msg":"trace[381543735] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:582; }","duration":"207.093404ms","start":"2023-11-28T03:58:29.544914Z","end":"2023-11-28T03:58:29.752008Z","steps":["trace[381543735] 'agreement among raft nodes before linearized reading'  (duration: 206.993139ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T03:58:29.997133Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"125.289108ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-9bqg8\" ","response":"range_response_count:1 size:4036"}
	{"level":"info","ts":"2023-11-28T03:58:29.997233Z","caller":"traceutil/trace.go:171","msg":"trace[1330111037] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-9bqg8; range_end:; response_count:1; response_revision:583; }","duration":"125.397543ms","start":"2023-11-28T03:58:29.871811Z","end":"2023-11-28T03:58:29.997209Z","steps":["trace[1330111037] 'range keys from in-memory index tree'  (duration: 125.006097ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.302018Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"526.535382ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940098037 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.61.13\" mod_revision:1028 > success:<request_put:<key:\"/registry/masterleases/192.168.61.13\" value_size:66 lease:1630738557940098034 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.13\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:07:30.302481Z","caller":"traceutil/trace.go:171","msg":"trace[1668307054] transaction","detail":"{read_only:false; response_revision:1036; number_of_response:1; }","duration":"646.462034ms","start":"2023-11-28T04:07:29.655976Z","end":"2023-11-28T04:07:30.302438Z","steps":["trace[1668307054] 'process raft request'  (duration: 646.366836ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.302609Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:29.655959Z","time spent":"646.591503ms","remote":"127.0.0.1:58270","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1034 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2023-11-28T04:07:30.302833Z","caller":"traceutil/trace.go:171","msg":"trace[1981600562] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"652.035167ms","start":"2023-11-28T04:07:29.650782Z","end":"2023-11-28T04:07:30.302817Z","steps":["trace[1981600562] 'process raft request'  (duration: 124.469731ms)","trace[1981600562] 'compare'  (duration: 526.381329ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:07:30.302912Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:29.650766Z","time spent":"652.105813ms","remote":"127.0.0.1:58240","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":118,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/masterleases/192.168.61.13\" mod_revision:1028 > success:<request_put:<key:\"/registry/masterleases/192.168.61.13\" value_size:66 lease:1630738557940098034 >> failure:<request_range:<key:\"/registry/masterleases/192.168.61.13\" > >"}
	{"level":"warn","ts":"2023-11-28T04:07:30.990811Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"432.694362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2023-11-28T04:07:30.990905Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.461575ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1630738557940098043 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" mod_revision:1029 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-11-28T04:07:30.99101Z","caller":"traceutil/trace.go:171","msg":"trace[1434998855] transaction","detail":"{read_only:false; response_revision:1037; number_of_response:1; }","duration":"396.403208ms","start":"2023-11-28T04:07:30.594594Z","end":"2023-11-28T04:07:30.990997Z","steps":["trace[1434998855] 'process raft request'  (duration: 265.790792ms)","trace[1434998855] 'compare'  (duration: 129.84135ms)"],"step_count":2}
	{"level":"warn","ts":"2023-11-28T04:07:30.99107Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:30.594573Z","time spent":"396.471652ms","remote":"127.0.0.1:58292","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":693,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" mod_revision:1029 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" value_size:620 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-mdjdvftesfkysykllfzksu6t4i\" > >"}
	{"level":"info","ts":"2023-11-28T04:07:30.990922Z","caller":"traceutil/trace.go:171","msg":"trace[1968723066] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1036; }","duration":"432.815922ms","start":"2023-11-28T04:07:30.558094Z","end":"2023-11-28T04:07:30.99091Z","steps":["trace[1968723066] 'range keys from in-memory index tree'  (duration: 432.628464ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:30.991246Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-11-28T04:07:30.558082Z","time spent":"433.145761ms","remote":"127.0.0.1:58274","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":29,"request content":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" "}
	{"level":"info","ts":"2023-11-28T04:07:56.45086Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2023-11-28T04:07:56.45915Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":816,"took":"7.958756ms","hash":2648089177}
	{"level":"info","ts":"2023-11-28T04:07:56.459226Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2648089177,"revision":816,"compact-revision":-1}
	{"level":"info","ts":"2023-11-28T04:12:56.459462Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1057}
	{"level":"info","ts":"2023-11-28T04:12:56.461443Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1057,"took":"1.650228ms","hash":2490332645}
	{"level":"info","ts":"2023-11-28T04:12:56.46151Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2490332645,"revision":1057,"compact-revision":816}
	
	* 
	* ==> kernel <==
	*  04:17:45 up 20 min,  0 users,  load average: 0.00, 0.08, 0.12
	Linux default-k8s-diff-port-725962 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] <==
	* W1128 04:12:59.458760       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:12:59.458891       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:12:59.458998       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:12:59.458828       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:12:59.459162       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:12:59.460543       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:13:58.266225       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:13:59.459170       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:13:59.459236       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:13:59.459248       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:13:59.461665       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:13:59.461776       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:13:59.461786       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:14:58.266234       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 04:15:58.266516       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:15:59.460408       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:15:59.460496       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:15:59.460504       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:15:59.462905       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:15:59.463093       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:15:59.463132       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:16:58.265758       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] <==
	* I1128 04:12:11.596462       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:12:40.952430       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:12:41.606474       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:13:10.958729       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:13:11.617658       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:13:40.966933       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:13:41.626541       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:14:10.973206       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:14:11.636066       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:14:29.593056       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="338.542µs"
	E1128 04:14:40.979401       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:14:41.645584       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:14:44.592424       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="317.068µs"
	E1128 04:15:10.986110       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:15:11.655173       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:15:40.992910       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:15:41.667149       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:16:10.999838       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:16:11.676531       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:16:41.005430       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:16:41.686147       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:17:11.012423       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:17:11.695228       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:17:41.017799       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:17:41.705901       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] <==
	* I1128 03:58:00.338942       1 server_others.go:69] "Using iptables proxy"
	I1128 03:58:00.367046       1 node.go:141] Successfully retrieved node IP: 192.168.61.13
	I1128 03:58:00.512269       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 03:58:00.512412       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 03:58:00.521038       1 server_others.go:152] "Using iptables Proxier"
	I1128 03:58:00.521136       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 03:58:00.521570       1 server.go:846] "Version info" version="v1.28.4"
	I1128 03:58:00.521872       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:58:00.524113       1 config.go:188] "Starting service config controller"
	I1128 03:58:00.524173       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 03:58:00.524219       1 config.go:97] "Starting endpoint slice config controller"
	I1128 03:58:00.524241       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 03:58:00.524926       1 config.go:315] "Starting node config controller"
	I1128 03:58:00.524976       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 03:58:00.624782       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 03:58:00.625004       1 shared_informer.go:318] Caches are synced for service config
	I1128 03:58:00.625434       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] <==
	* I1128 03:57:55.267632       1 serving.go:348] Generated self-signed cert in-memory
	W1128 03:57:58.371890       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1128 03:57:58.371954       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 03:57:58.371972       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1128 03:57:58.371982       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1128 03:57:58.472700       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I1128 03:57:58.472777       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 03:57:58.492420       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1128 03:57:58.492490       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1128 03:57:58.496417       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1128 03:57:58.496551       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1128 03:57:58.593814       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:57:20 UTC, ends at Tue 2023-11-28 04:17:45 UTC. --
	Nov 28 04:14:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:14:51.586672     904 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:14:51 default-k8s-diff-port-725962 kubelet[904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:14:51 default-k8s-diff-port-725962 kubelet[904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:14:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:14:56 default-k8s-diff-port-725962 kubelet[904]: E1128 04:14:56.571689     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:15:09 default-k8s-diff-port-725962 kubelet[904]: E1128 04:15:09.573196     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:15:24 default-k8s-diff-port-725962 kubelet[904]: E1128 04:15:24.572198     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:15:36 default-k8s-diff-port-725962 kubelet[904]: E1128 04:15:36.573000     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:15:50 default-k8s-diff-port-725962 kubelet[904]: E1128 04:15:50.572151     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:15:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:15:51.586070     904 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:15:51 default-k8s-diff-port-725962 kubelet[904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:15:51 default-k8s-diff-port-725962 kubelet[904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:15:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:16:04 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:04.572649     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:16:17 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:17.572848     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:16:32 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:32.573006     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:16:46 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:46.572362     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:16:51 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:51.587486     904 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:16:51 default-k8s-diff-port-725962 kubelet[904]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:16:51 default-k8s-diff-port-725962 kubelet[904]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:16:51 default-k8s-diff-port-725962 kubelet[904]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:16:59 default-k8s-diff-port-725962 kubelet[904]: E1128 04:16:59.572557     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:17:10 default-k8s-diff-port-725962 kubelet[904]: E1128 04:17:10.572241     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:17:25 default-k8s-diff-port-725962 kubelet[904]: E1128 04:17:25.573257     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	Nov 28 04:17:39 default-k8s-diff-port-725962 kubelet[904]: E1128 04:17:39.573490     904 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-9bqg8" podUID="48d11dc2-ea03-4b2d-ac8b-afa0c6273c80"
	
	* 
	* ==> storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] <==
	* I1128 03:58:31.098141       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 03:58:31.119849       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 03:58:31.120383       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 03:58:48.529149       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 03:58:48.530235       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a!
	I1128 03:58:48.531896       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b615493e-abbc-4088-a40d-dcb3a179f972", APIVersion:"v1", ResourceVersion:"599", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a became leader
	I1128 03:58:48.630632       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-725962_01718ff6-75eb-4d16-9ec2-d5670481b48a!
	
	* 
	* ==> storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] <==
	* I1128 03:58:00.330872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1128 03:58:30.335991       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-9bqg8
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8: exit status 1 (67.968684ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-9bqg8" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-725962 describe pod metrics-server-57f55c9bc5-9bqg8: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (378.57s)
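Note on the logs above: the only non-Running pod in this post-mortem is metrics-server-57f55c9bc5-9bqg8, and the kubelet log shows why — the addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table in the next post-mortem), so the image fake.domain/registry.k8s.io/echoserver:1.4 stays in ImagePullBackOff, and the controller-manager correspondingly keeps reporting metrics.k8s.io/v1beta1 as stale GroupVersion discovery. A minimal manual triage sketch, assuming the default-k8s-diff-port-725962 profile were still running and its kubectl context reachable (illustrative only, not part of the test code):

	# confirm the aggregated metrics APIService is not Available
	kubectl --context default-k8s-diff-port-725962 get apiservice v1beta1.metrics.k8s.io
	# confirm the metrics-server pod is stuck in ImagePullBackOff and see the image it is pulling
	kubectl --context default-k8s-diff-port-725962 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context default-k8s-diff-port-725962 -n kube-system describe pod -l k8s-app=metrics-server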

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (342.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:11:58.569280  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:12:05.902751  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222348 -n no-preload-222348
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:17:30.152567513 +0000 UTC m=+5797.327541658
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-222348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-222348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.091µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-222348 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
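For context, the assertion at start_stop_delete_test.go:297 checks that the dashboard-metrics-scraper deployment references the overridden image registry.k8s.io/echoserver:1.4 (the --images=MetricsScraper flag in the Audit table below); it reports an empty "Addon deployment info" here because the describe call itself hit the deadline. A roughly equivalent manual check, assuming the no-preload-222348 context is reachable (illustrative sketch, not the test's own code):

	# print the container images of the dashboard-metrics-scraper deployment
	kubectl --context no-preload-222348 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to contain: registry.k8s.io/echoserver:1.4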
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-222348 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-222348 logs -n 25: (1.308394096s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
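The block above shows the diagnostic pattern minikube follows on this failure path: for each control-plane component it resolves container IDs with `crictl ps -a --quiet --name=<component>` and then tails the last 400 log lines with `crictl logs --tail 400 <id>`. A minimal local sketch of the same pattern in Go, running crictl directly instead of over minikube's ssh_runner (the component list and use of sudo are assumptions for illustration):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs mirrors the pattern in the log above: resolve container IDs by
// component name with `crictl ps`, then tail each container's logs.
func gatherLogs(components []string) {
	for _, name := range components {
		// List all containers (running or exited) whose name matches.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("listing %s containers: %v\n", name, err)
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// Tail the last 400 lines, as logs.go does for each component.
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s (%s): %v\n", name, id, err)
				continue
			}
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}

func main() {
	gatherLogs([]string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner"})
}
```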
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
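The join commands printed above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the cluster CA certificate's Subject Public Key Info. It can be recomputed on the control-plane node; a hedged sketch of that computation in Go (the CA path is kubeadm's usual default, assumed here):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// kubeadm's default CA location on the control-plane node (assumed here).
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```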
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
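The repeated `kubectl get sa default` runs above are minikube waiting for the default service account to exist before it finishes elevating kube-system privileges. A minimal sketch of that wait loop (the retry interval and timeout below are assumptions, not the values minikube uses):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// deadline passes, mirroring the retry loop in the log above.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"get", "sa", "default").Run()
		if err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not found after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```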
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
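With the metrics-server manifests applied above, the usual manual check is that its aggregated APIService becomes Available and that `kubectl top` returns data. A small hedged sketch (the APIService name is the one metrics-server normally registers; this is not part of the test itself):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// metrics-server registers the metrics.k8s.io aggregated API; check it first,
	// then try to read node metrics through it.
	for _, args := range [][]string{
		{"get", "apiservice", "v1beta1.metrics.k8s.io"},
		{"top", "nodes"},
	} {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("$ kubectl %v\n%s", args, out)
		if err != nil {
			fmt.Printf("(not ready yet: %v)\n", err)
		}
	}
}
```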
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
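The health wait above simply polls the apiserver's /healthz endpoint until it returns 200 with body "ok". A stripped-down probe in the same spirit (the address is copied from the log line above; this sketch skips TLS verification for brevity, whereas a real check should trust the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a cert signed by the cluster CA; skip verification in this sketch only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.39.106:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("apiserver never became healthy")
}
```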
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
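
The retry.go lines above illustrate the poll-with-backoff pattern the log records while waiting for kube-system control-plane pods: list the pods, report which components are still missing, then wait a growing interval before the next attempt. Below is a minimal, self-contained sketch of that pattern; the helper names and the stubbed pod list are hypothetical, not minikube's actual code.

```go
// pollpods.go: sketch of the poll-with-backoff wait seen in the retry.go log lines.
// Names (waitForComponents, listRunningPods) are illustrative only.
package main

import (
	"errors"
	"fmt"
	"time"
)

// listRunningPods stands in for a client-go query of kube-system pods;
// it is stubbed so the sketch is self-contained.
func listRunningPods() map[string]bool {
	return map[string]bool{"coredns": true, "kube-proxy": true}
}

// waitForComponents polls until every required component is running,
// roughly doubling the wait between attempts, up to a deadline.
func waitForComponents(required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	wait := 400 * time.Millisecond
	for {
		running := listRunningPods()
		var missing []string
		for _, c := range required {
			if !running[c] {
				missing = append(missing, c)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out; still missing: " + fmt.Sprint(missing))
		}
		fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
		time.Sleep(wait)
		if wait < 8*time.Second {
			wait *= 2
		}
	}
}

func main() {
	err := waitForComponents([]string{"etcd", "kube-apiserver", "coredns"}, 2*time.Second)
	fmt.Println("result:", err)
}
```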
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
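
The start.go lines above show the outer recovery step: when provisioning fails with "host is not running", the error is surfaced as a warning and the whole host start is attempted again after a short pause. A minimal sketch of that try-again-after-delay structure follows; startHost is a stub standing in for the real provisioning call, not minikube's API.

```go
// startretry.go: sketch of the "StartHost failed, but will try again" flow above.
// startHost is a hypothetical stand-in that fails once, then succeeds.
package main

import (
	"errors"
	"fmt"
	"time"
)

var attempts int

func startHost() error {
	attempts++
	if attempts == 1 {
		return errors.New("provision: host is not running")
	}
	return nil
}

func main() {
	if err := startHost(); err != nil {
		fmt.Println("! StartHost failed, but will try again:", err)
		time.Sleep(5 * time.Second)
		if err := startHost(); err != nil {
			fmt.Println("exiting due to error:", err)
			return
		}
	}
	fmt.Println("host started")
}
```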
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
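
The DBG lines above show how the SSH-availability probe works: the driver shells out to the system ssh client with strict, non-interactive options and runs `exit 0`, treating a zero exit status as "SSH is up". A rough, self-contained sketch of that probe using os/exec is below; the host address, key path, and option list are illustrative assumptions, not taken from a real configuration.

```go
// sshprobe.go: sketch of the "About to run SSH command: exit 0" availability check.
// Host and key path are placeholders for illustration only.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady returns true when `ssh ... exit 0` against the target succeeds.
func sshReady(host, keyPath string) bool {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run() == nil
}

func main() {
	host, key := "192.168.72.208", "/path/to/id_rsa" // placeholder values
	for i := 0; i < 5; i++ {
		if sshReady(host, key) {
			fmt.Println("SSH is available")
			return
		}
		fmt.Println("SSH not ready, retrying...")
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}
```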
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
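
The fix.go lines above report the guest/host clock comparison: the guest time (read over SSH) is subtracted from the host's view of the remote time and the absolute difference is checked against a tolerance. A small sketch of that arithmetic is below, using the timestamps from the log; the tolerance value is an assumption for illustration, not minikube's actual setting.

```go
// clockdelta.go: sketch of the guest/host clock delta check reported above.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Guest time in the log comes from running a date command over SSH;
	// here both timestamps are hard-coded to keep the sketch self-contained.
	guest := time.Date(2023, 11, 28, 4, 7, 22, 982632661, time.UTC)
	host := time.Date(2023, 11, 28, 4, 7, 22, 914776935, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed tolerance for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
```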
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
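The find/mv step above sidelines any pre-existing bridge or podman CNI configs by appending a ".mk_disabled" suffix so they stop conflicting with the CNI minikube configures later. A minimal Go sketch of the same rename pass (hypothetical code, not minikube's implementation; the directory and suffix are taken from the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d" // directory scanned in the log above
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			// Same patterns the find command targets: *bridge* or *podman*.
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, err)
					continue
				}
				fmt.Printf("disabled %s\n", src)
			}
		}
	}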
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
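The three sed invocations above rewrite the cri-o drop-in so it uses the registry.k8s.io/pause:3.9 pause image, the cgroupfs cgroup manager, and conmon_cgroup = "pod". A rough Go equivalent of those in-place edits (an illustrative sketch, not minikube's own code):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in edited in the log
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := string(data)
		// Drop any existing conmon_cgroup line, then rewrite pause_image and
		// cgroup_manager, appending conmon_cgroup = "pod" right after the latter.
		out = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n`).ReplaceAllString(out, "")
		out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.9"`)
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(out, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
		if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
			panic(err)
		}
	}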
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
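After restarting crio, the runner waits up to 60s for /var/run/crio/crio.sock to appear before trusting crictl. A small stand-alone sketch of that wait (an assumed helper, not minikube's code; the path and timeout come from the log):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket exists, crio is listening
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is present")
	}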
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
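The preload step above copies the ~458 MB image tarball to the node and unpacks it with "tar -I lz4 -C /var -xf", which the log times at about 3.3 seconds. A sketch of that extraction step (illustrative only, shelling out to the same tar invocation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extract failed:", err)
			os.Exit(1)
		}
		fmt.Printf("took %s to extract the tarball\n", time.Since(start).Round(time.Millisecond))
		_ = os.Remove("/preloaded.tar.lz4") // the log shows the tarball being removed afterwards
	}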
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
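The 10-kubeadm.conf drop-in just copied over is the unit text dumped above: it overrides ExecStart so the kubelet runs with the node name, node IP, and CRI socket for this profile. A hedged sketch of how such a drop-in could be rendered before being scp'd (the helper is hypothetical; the field values mirror the log):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		const (
			version  = "v1.28.4"
			nodeName = "embed-certs-672176"
			nodeIP   = "192.168.72.208"
			criSock  = "unix:///var/run/crio/crio.sock"
		)
		dropIn := fmt.Sprintf(`[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=%s --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

	[Install]
	`, version, criSock, nodeName, nodeIP)

		// In the real flow this content is copied to the node over SSH; locally we just write it.
		if err := os.WriteFile("10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}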
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
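Each CA bundle above is hashed with "openssl x509 -hash" and linked into /etc/ssl/certs as "<hash>.0", which is how OpenSSL locates trusted certificates; the log shows minikubeCA.pem mapping to b5213941.0, for example. A sketch of that wiring (not minikube's implementation, same openssl invocation):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // one of the certs from the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate "ln -fs": replace an existing link if present
		if err := os.Symlink(cert, link); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s -> %s\n", link, cert)
	}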
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
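The six "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least the next 24 hours. An equivalent check in pure Go (a sketch assuming each file holds a single PEM-encoded certificate):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := expiresWithin(p, 24*time.Hour)
			if err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Printf("%s expires within 24h: %v\n", p, soon)
		}
	}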
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
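The repeated "Checking apiserver status" lines above are a polling loop: pgrep for a kube-apiserver process roughly every 500ms until the ~10s context deadline expires (04:07:30.66 to 04:07:40.66), at which point the cluster is marked for reconfiguration. A simplified stand-alone sketch of that loop (the timeout value is read off the timestamps, not taken from minikube's source):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()

		for {
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("apiserver process found")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("needs reconfigure: apiserver error:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond):
				// keep polling
			}
		}
	}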
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
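The healthz probes above walk through connection refused, then 403 (anonymous user), then 500 (post-start hooks like rbac/bootstrap-roles still pending), and finally 200 "ok". A bare-bones poller in the same spirit (a sketch only: it skips TLS verification and polls anonymously until /healthz returns 200):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func firstLine(s string) string {
		for i, r := range s {
			if r == '\n' {
				return s[:i]
			}
		}
		return s
	}

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.208:8443/healthz"
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("stopped:", err) // e.g. connection refused while the pod restarts
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, firstLine(string(body)))
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for a healthy apiserver")
	}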
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
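
The reset below is triggered solely by metrics-server-57f55c9bc5-sbkpc never reporting Ready within the 4m0s budget. A quick way to inspect such a stuck pod on a live cluster (pod name taken from the log above; the kubeconfig context name is assumed from the profile):

    kubectl --context embed-certs-672176 -n kube-system describe pod metrics-server-57f55c9bc5-sbkpc
    kubectl --context embed-certs-672176 -n kube-system logs metrics-server-57f55c9bc5-sbkpc --tail=50
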
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
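
The [WARNING Service-Kubelet] line above is kubeadm's own hint; on a minikube node it can be addressed directly over SSH, assuming the profile name seen in this log:

    minikube -p embed-certs-672176 ssh -- sudo systemctl enable kubelet.service
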
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
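
The log records only the size (457 bytes) of the generated bridge conflist, not its contents. A minimal sketch of what a bridge CNI config of this kind typically looks like follows; the field values are illustrative, not minikube's exact file:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
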
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
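
The burst of identical "kubectl get sa default" calls above is a poll loop: the elevated kube-system privileges are only considered applied once the default service account exists. A bash equivalent of that wait, using the exact paths from the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
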
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
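
The sed pipeline completed above rewrites the coredns ConfigMap to add a hosts block for host.minikube.internal. Assuming the embed-certs-672176 kubeconfig context, the injected record can be confirmed with:

    kubectl --context embed-certs-672176 -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'

The output should include the "192.168.72.1 host.minikube.internal" entry reported in the log line above.
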
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
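
A rough way to spot-check the three addons just enabled; the deployment, pod, and StorageClass names below are the usual minikube ones and are assumptions here, not taken from this log:

    kubectl --context embed-certs-672176 -n kube-system rollout status deployment/metrics-server --timeout=120s
    kubectl --context embed-certs-672176 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-672176 get storageclass standard
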
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:56:57 UTC, ends at Tue 2023-11-28 04:17:31 UTC. --
	Nov 28 04:17:30 no-preload-222348 crio[717]: time="2023-11-28 04:17:30.967909283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145050967894526,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a096ded6-8008-4b99-9fbf-a165651c5390 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:30 no-preload-222348 crio[717]: time="2023-11-28 04:17:30.968524326Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9a4450e5-a9c8-47a0-a3bc-aebf821fa2a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:30 no-preload-222348 crio[717]: time="2023-11-28 04:17:30.968606066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9a4450e5-a9c8-47a0-a3bc-aebf821fa2a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:30 no-preload-222348 crio[717]: time="2023-11-28 04:17:30.968839256Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9a4450e5-a9c8-47a0-a3bc-aebf821fa2a2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.012675063Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=95819c7f-285e-4ae8-877c-12f78a237414 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.012801546Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=95819c7f-285e-4ae8-877c-12f78a237414 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.014068041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=119337ab-5d74-4de7-999b-00b7e92f4bad name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.014767395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145051014494699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=119337ab-5d74-4de7-999b-00b7e92f4bad name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.015357756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=33c9aab8-db9e-444f-9230-e71bf584f2b2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.015404270Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=33c9aab8-db9e-444f-9230-e71bf584f2b2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.015640390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=33c9aab8-db9e-444f-9230-e71bf584f2b2 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.058234935Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=aed7ea4f-fd57-4870-b93b-3789708eb6d0 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.058324383Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=aed7ea4f-fd57-4870-b93b-3789708eb6d0 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.059689883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=560cfbed-1621-46b6-8fc2-77ac0d71ed82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.060136038Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145051060122901,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=560cfbed-1621-46b6-8fc2-77ac0d71ed82 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.061134606Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66bd97c9-4449-48cf-b7d5-72ef87269cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.061188109Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66bd97c9-4449-48cf-b7d5-72ef87269cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.061359488Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66bd97c9-4449-48cf-b7d5-72ef87269cf3 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.098806321Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ff6cf629-7a5c-4867-9c56-3f4797f00229 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.098888344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ff6cf629-7a5c-4867-9c56-3f4797f00229 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.099893807Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=cf38d53f-f8e2-4d03-a37a-ec97d5930b99 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.100211865Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145051100201112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=cf38d53f-f8e2-4d03-a37a-ec97d5930b99 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.100647187Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=14fb96fc-f265-49e3-b6f7-f5d1c4cb5454 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.100690751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=14fb96fc-f265-49e3-b6f7-f5d1c4cb5454 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:17:31 no-preload-222348 crio[717]: time="2023-11-28 04:17:31.100919491Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3,PodSandboxId:df72b36aadcf86ce69f73f311171efc2a1b4f48c3464932afba203d12db583f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:19704ecb8a22fb777f438422b7f638673596735ee0223499327597aebef1072e,State:CONTAINER_RUNNING,CreatedAt:1701144164680918480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-2cf7h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bcbbfab4-753c-4925-9154-27a19052567a,},Annotations:map[string]string{io.kubernetes.container.hash: 7c242387,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf,PodSandboxId:7940165f0057b45e3d32cc89cde399384640c76ed775ba4e21d198e99ee9f64b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1701144164573598275,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37152287-4d4b-45db-a357-1468fc210bfc,},Annotations:map[string]string{io.kubernetes.container.hash: 5cc09e7f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0,PodSandboxId:e56c380450d2377f41e918a7fd14471071a2ca2defeeeaaf2cf87db1e72faf24,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1701144163821646144,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-kqgf5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c63dad72-b046-4f33-b851-8ca60c237dd7,},Annotations:map[string]string{io.kubernetes.container.hash: b4feeb0b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2,PodSandboxId:86e73488f5313d0ee2ebde20476582937e1ca9cef523aad278cdaa8028a9c846,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:45ece34cbcc6c82c13e0e535245454d071df5a3b78b23eb779c1b6b9ab3602d2,State:CONTAINER_RUNNING,CreatedAt:1701144140625840021,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
6618416a9d62cdf9f0f3c0e83b58685f,},Annotations:map[string]string{io.kubernetes.container.hash: ee29696d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a,PodSandboxId:b6b101c20ad2a682a5607a93453575695f3e165aeac889d56cdc20ffd730a153,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1701144140310038318,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35a8e2360c9fa006fd620573f15a218,},Annotations:map
[string]string{io.kubernetes.container.hash: 1a4b2524,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943,PodSandboxId:7587ef7ab319935a952a759c1d4cf83b358408573ddeb4d4f7de916100d42941,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:0fbe1bf4175a8c9b7428f845038392769805f82a277f34ee0bfa3d893b7fe9f5,State:CONTAINER_RUNNING,CreatedAt:1701144140137449225,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5749f01db5a8e0f1bb15715c6
2c91664,},Annotations:map[string]string{io.kubernetes.container.hash: 97159cab,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7,PodSandboxId:a8f4db4a98220bdc2ff2b384292d6a434af6764be032721ce3cba474609b18f7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:8691a74e5237be5a787cea07aefa76290f24bfac5c6b7a07469172fef09305c6,State:CONTAINER_RUNNING,CreatedAt:1701144139927862984,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-222348,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0d58547c5472ad0261e3309d4e4dda4,},A
nnotations:map[string]string{io.kubernetes.container.hash: 347dae6b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=14fb96fc-f265-49e3-b6f7-f5d1c4cb5454 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1957c12842b67       df157df72acec03850dc8700e790c40c2bc004a984f17dcd73a380cec7986c55   14 minutes ago      Running             kube-proxy                0                   df72b36aadcf8       kube-proxy-2cf7h
	850958f2fb6eb       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   7940165f0057b       storage-provisioner
	03135efda9053       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   e56c380450d23       coredns-76f75df574-kqgf5
	510892e048714       4c269eaa91e8d5ec4a9e21be01cd65a72f316e6761e3bb12c791487f435cfde9   15 minutes ago      Running             kube-scheduler            2                   86e73488f5313       kube-scheduler-no-preload-222348
	6544adf0def62       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   15 minutes ago      Running             etcd                      2                   b6b101c20ad2a       etcd-no-preload-222348
	7cf7aa04e4dff       e8d5e880f29508e1f6f67d519fff73cd0b1e51916644c70ae46a55c2b10508a4   15 minutes ago      Running             kube-controller-manager   2                   7587ef7ab3199       kube-controller-manager-no-preload-222348
	3db05ce5a1b14       e5d4aeafd7b819ed1ac4213c43ed75833dc0f0996f676ba2ef21e6d506bc4eb7   15 minutes ago      Running             kube-apiserver            2                   a8f4db4a98220       kube-apiserver-no-preload-222348
	
	* 
	* ==> coredns [03135efda90532612c38c0353c67b59d1316a9173bb00c795a5437b198f81aa0] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:56943 - 39680 "HINFO IN 3995391236530009408.8397340726467799273. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036086635s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-222348
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-222348
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=no-preload-222348
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:02:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-222348
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:17:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:13:01 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:13:01 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:13:01 +0000   Tue, 28 Nov 2023 04:02:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:13:01 +0000   Tue, 28 Nov 2023 04:02:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    no-preload-222348
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a49fea7b2a8c47519b7ba0d73fbaad30
	  System UUID:                a49fea7b-2a8c-4751-9b7b-a0d73fbaad30
	  Boot ID:                    b22808a2-5e4c-467c-b657-05f3e0a0861b
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.0
	  Kube-Proxy Version:         v1.29.0-rc.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-kqgf5                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-222348                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-no-preload-222348             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-no-preload-222348    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-2cf7h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-222348             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-kl8k4              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node no-preload-222348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node no-preload-222348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node no-preload-222348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node no-preload-222348 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node no-preload-222348 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node no-preload-222348 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node no-preload-222348 event: Registered Node no-preload-222348 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 03:56] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.080691] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.493450] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.466422] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139852] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.439098] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Nov28 03:57] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.120544] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.140641] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.122158] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.255896] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +30.732579] systemd-fstab-generator[1331]: Ignoring "noauto" for root device
	[ +20.868516] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 04:02] systemd-fstab-generator[3952]: Ignoring "noauto" for root device
	[  +9.790081] systemd-fstab-generator[4279]: Ignoring "noauto" for root device
	[ +13.282813] kauditd_printk_skb: 2 callbacks suppressed
	[  +8.313267] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [6544adf0def62fe77964c6e9e5b7c3b3e91408bed82ccc7ab9c53d397c9f769a] <==
	* {"level":"info","ts":"2023-11-28T04:02:23.604006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgPreVoteResp from 133f99d1dc1797cc at term 1"}
	{"level":"info","ts":"2023-11-28T04:02:23.604023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc received MsgVoteResp from 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"133f99d1dc1797cc became leader at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.604054Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 133f99d1dc1797cc elected leader 133f99d1dc1797cc at term 2"}
	{"level":"info","ts":"2023-11-28T04:02:23.605849Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.606902Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"133f99d1dc1797cc","local-member-attributes":"{Name:no-preload-222348 ClientURLs:[https://192.168.39.106:2379]}","request-path":"/0/members/133f99d1dc1797cc/attributes","cluster-id":"db63b0e3647a827","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:02:23.607156Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:02:23.607575Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db63b0e3647a827","local-member-id":"133f99d1dc1797cc","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.60768Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.607873Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:02:23.607927Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:02:23.609907Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:02:23.609957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T04:02:23.610037Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:02:23.611538Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.106:2379"}
	{"level":"info","ts":"2023-11-28T04:07:31.512819Z","caller":"traceutil/trace.go:171","msg":"trace[482128868] transaction","detail":"{read_only:false; response_revision:730; number_of_response:1; }","duration":"227.376137ms","start":"2023-11-28T04:07:31.285284Z","end":"2023-11-28T04:07:31.51266Z","steps":["trace[482128868] 'process raft request'  (duration: 226.559304ms)"],"step_count":1}
	{"level":"warn","ts":"2023-11-28T04:07:31.754533Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"133.358376ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-11-28T04:07:31.75483Z","caller":"traceutil/trace.go:171","msg":"trace[1868309745] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:730; }","duration":"133.782256ms","start":"2023-11-28T04:07:31.62102Z","end":"2023-11-28T04:07:31.754802Z","steps":["trace[1868309745] 'range keys from in-memory index tree'  (duration: 133.213828ms)"],"step_count":1}
	{"level":"info","ts":"2023-11-28T04:12:23.64144Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":724}
	{"level":"info","ts":"2023-11-28T04:12:23.644972Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":724,"took":"2.44448ms","hash":2600799354}
	{"level":"info","ts":"2023-11-28T04:12:23.645097Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2600799354,"revision":724,"compact-revision":-1}
	{"level":"info","ts":"2023-11-28T04:17:23.649671Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":966}
	{"level":"info","ts":"2023-11-28T04:17:23.655627Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":966,"took":"5.04028ms","hash":1270754143}
	{"level":"info","ts":"2023-11-28T04:17:23.65579Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1270754143,"revision":966,"compact-revision":724}
	
	* 
	* ==> kernel <==
	*  04:17:31 up 20 min,  0 users,  load average: 0.10, 0.17, 0.23
	Linux no-preload-222348 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [3db05ce5a1b14eda3cf86a731b007a7a0315f0b4e6e0049f18b063f74a9fb9b7] <==
	* I1128 04:12:26.087477       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:13:26.087523       1 handler_proxy.go:93] no RequestInfo found in the context
	W1128 04:13:26.087595       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:13:26.087820       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:13:26.087994       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1128 04:13:26.088030       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:13:26.089629       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:15:26.088288       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:15:26.088386       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:15:26.088401       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:15:26.090932       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:15:26.090999       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:15:26.091006       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:17:25.093188       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:17:25.093347       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1128 04:17:26.094089       1 handler_proxy.go:93] no RequestInfo found in the context
	W1128 04:17:26.094176       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:17:26.095668       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:17:26.095824       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1128 04:17:26.095877       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:17:26.097172       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [7cf7aa04e4dffc949886bb6d2b41dcb22da8affefee453973bf3ab390bef6943] <==
	* I1128 04:11:40.773361       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:12:10.284252       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:12:10.784497       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:12:40.291208       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:12:40.793195       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:13:10.297016       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:13:10.802651       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:13:40.304064       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:13:40.811290       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:13:43.662858       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="359.424µs"
	I1128 04:13:58.666159       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="195.226µs"
	E1128 04:14:10.312776       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:14:10.820537       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:14:40.319254       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:14:40.829366       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:15:10.326364       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:15:10.838569       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:15:40.333322       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:15:40.848129       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:16:10.339994       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:16:10.858265       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:16:40.346344       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:16:40.868418       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:17:10.352500       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:17:10.876640       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [1957c12842b67439cf0fd2c8e6621ba2313b2ed1176bd562fcdfe9ca237e80b3] <==
	* I1128 04:02:45.031087       1 server_others.go:72] "Using iptables proxy"
	I1128 04:02:45.054305       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	I1128 04:02:45.102085       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I1128 04:02:45.102161       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 04:02:45.102204       1 server_others.go:168] "Using iptables Proxier"
	I1128 04:02:45.105947       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:02:45.106202       1 server.go:865] "Version info" version="v1.29.0-rc.0"
	I1128 04:02:45.106251       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:02:45.107304       1 config.go:188] "Starting service config controller"
	I1128 04:02:45.107358       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:02:45.107390       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:02:45.107407       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:02:45.109636       1 config.go:315] "Starting node config controller"
	I1128 04:02:45.109678       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:02:45.208456       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:02:45.208587       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:02:45.210028       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [510892e048714ae0c99171fbd0aac85698eaa61741069edb01085a22bdcc9ac2] <==
	* W1128 04:02:25.972683       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:02:25.972797       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1128 04:02:26.001091       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:02:26.001228       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1128 04:02:26.105121       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:02:26.105216       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 04:02:26.154812       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.154979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.203193       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.203291       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.274042       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:02:26.274150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:02:26.301635       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:02:26.301804       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:02:26.315044       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:02:26.315160       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 04:02:26.336899       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1128 04:02:26.337034       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1128 04:02:26.375984       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 04:02:26.376077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 04:02:26.405027       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:02:26.405117       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:02:26.578548       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:02:26.578658       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1128 04:02:29.201072       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:56:57 UTC, ends at Tue 2023-11-28 04:17:31 UTC. --
	Nov 28 04:14:52 no-preload-222348 kubelet[4286]: E1128 04:14:52.645031    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:15:06 no-preload-222348 kubelet[4286]: E1128 04:15:06.646291    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:15:17 no-preload-222348 kubelet[4286]: E1128 04:15:17.642946    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:15:28 no-preload-222348 kubelet[4286]: E1128 04:15:28.666458    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:15:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:15:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:15:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:15:32 no-preload-222348 kubelet[4286]: E1128 04:15:32.644104    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:15:46 no-preload-222348 kubelet[4286]: E1128 04:15:46.643402    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:16:01 no-preload-222348 kubelet[4286]: E1128 04:16:01.643427    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:16:15 no-preload-222348 kubelet[4286]: E1128 04:16:15.643385    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:16:28 no-preload-222348 kubelet[4286]: E1128 04:16:28.664540    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:16:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:16:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:16:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:16:29 no-preload-222348 kubelet[4286]: E1128 04:16:29.643792    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:16:42 no-preload-222348 kubelet[4286]: E1128 04:16:42.646090    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:16:55 no-preload-222348 kubelet[4286]: E1128 04:16:55.643242    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:17:07 no-preload-222348 kubelet[4286]: E1128 04:17:07.643642    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:17:18 no-preload-222348 kubelet[4286]: E1128 04:17:18.643635    4286 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kl8k4" podUID="de5f6e30-71af-4043-86de-11d878cc86c2"
	Nov 28 04:17:28 no-preload-222348 kubelet[4286]: E1128 04:17:28.664803    4286 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:17:28 no-preload-222348 kubelet[4286]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:17:28 no-preload-222348 kubelet[4286]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:17:28 no-preload-222348 kubelet[4286]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:17:28 no-preload-222348 kubelet[4286]: E1128 04:17:28.716552    4286 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	
	* 
	* ==> storage-provisioner [850958f2fb6eb8d5bc32a7fe0b9286cf09a1787673ed0cc9dd96ee1eac0636bf] <==
	* I1128 04:02:44.843305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:02:44.899093       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:02:44.899188       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:02:44.914589       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:02:44.915687       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263!
	I1128 04:02:44.915394       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ac131277-f0b2-4398-b830-9b6c80a229fd", APIVersion:"v1", ResourceVersion:"458", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263 became leader
	I1128 04:02:45.017072       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-222348_45a8d0c4-2d08-4313-84e0-658422aad263!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-222348 -n no-preload-222348
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-222348 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kl8k4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4: exit status 1 (67.403825ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kl8k4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-222348 describe pod metrics-server-57f55c9bc5-kl8k4: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (342.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:12:55.196179  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:13:34.223099  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 04:13:43.673850  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-672176 -n embed-certs-672176
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:21:50.1007915 +0000 UTC m=+6057.275765652
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-672176 logs -n 25
E1128 04:21:50.535524  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/old-k8s-version-666657/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-672176 logs -n 25: (1.282759131s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	| delete  | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
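
The "host record injected" line above is the tail end of the sed pipeline run at 04:02:41.645: minikube rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway of the VM network. The block it splices into the Corefile (taken verbatim from that sed expression) is:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
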
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
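
At this point the metrics-server manifests have been applied but its pod is still Pending (see the system_pods listing a few seconds later). A quick manual check of the addon, assuming the kubectl context this run creates and the usual v1beta1.metrics.k8s.io APIService name registered by metrics-apiservice.yaml, would be:

        kubectl --context no-preload-222348 -n kube-system get deploy metrics-server
        kubectl --context no-preload-222348 get apiservice v1beta1.metrics.k8s.io
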
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
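
The healthz probe above talks to the apiserver directly over HTTPS. A roughly equivalent check from inside the guest, using the binary and kubeconfig paths that appear in the Run lines of this log, is:

        sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl \
          --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz
        # prints "ok" when the apiserver reports healthy
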
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
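
The NodePressure step reads the node's capacity fields rather than its conditions; the two values logged above come from .status.capacity on the node object and can be inspected directly, for example:

        kubectl --context no-preload-222348 get node no-preload-222348 -o jsonpath='{.status.capacity}'
        # the returned map includes the cpu (2) and ephemeral-storage (17784752Ki) values logged above
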
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
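
From here the profile is usable from the host; the version line above notes a one-minor skew (kubectl 1.28.4 against a 1.29.0-rc.0 cluster), which kubectl tolerates. For example:

        kubectl config use-context no-preload-222348
        kubectl get pods -A
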
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
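
The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. With the certificateDir this run uses (/var/lib/minikube/certs, per the [certs] line earlier) and assuming the default RSA CA key, it can be recomputed on the node with the openssl pipeline from the kubeadm documentation:

        openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
          | openssl rsa -pubin -outform der 2>/dev/null \
          | openssl dgst -sha256 -hex | sed 's/^.* //'
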
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
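
The 457-byte file written to /etc/cni/net.d/1-k8s.conflist above is minikube's bridge CNI configuration. Its exact contents are not captured in this log; a bridge-plus-portmap conflist of the kind this step installs looks roughly like the following (all field values here are illustrative, not the actual file):

        {
          "cniVersion": "0.3.1",
          "name": "bridge",
          "plugins": [
            {
              "type": "bridge",
              "bridge": "bridge",
              "isDefaultGateway": true,
              "ipMasq": true,
              "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
            },
            { "type": "portmap", "capabilities": { "portMappings": true } }
          ]
        }
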
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
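
The long run of repeated "kubectl get sa default" invocations above (04:03:31 through 04:03:46) is minikube polling for the default ServiceAccount to exist after kubeadm init; kube-controller-manager creates it shortly after the control plane comes up, and elevateKubeSystemPrivileges returns on the first successful get. A shell equivalent of that wait, using the exact command from the log (the retry interval here is an assumption):

        until sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
          sleep 0.5
        done
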
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
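The sed pipeline that just completed rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway. The block it inserts ahead of the existing forward directive (taken directly from the sed expression above; the rest of the Corefile is whatever the cluster already carries) is:

        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }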
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
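The healthz probe above queries the apiserver endpoint directly from inside the driver. The same check can be reproduced from the host through the kubeconfig context for this profile (context name taken from this run; standard kubectl flags):

        kubectl --context old-k8s-version-666657 get --raw /healthz
        # a healthy apiserver answers with: ok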
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
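The retry loop above simply re-lists kube-system pods until the etcd, kube-apiserver, kube-controller-manager and kube-scheduler mirror pods register and report Running. A rough manual equivalent, assuming the usual kubeadm control-plane labels apply to this v1.16 cluster, is:

        kubectl --context old-k8s-version-666657 -n kube-system wait pod \
          -l tier=control-plane --for=condition=Ready --timeout=5m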
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
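The warning above flags a client/server skew of twelve minor versions; kubectl is only supported within one minor version of the apiserver, hence the suggestion to use the kubectl bundled with minikube. With the profile named explicitly (flag placement assumed, per minikube's global --profile option), that is:

        minikube -p old-k8s-version-666657 kubectl -- get pods -A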
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
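The long run of "no route to host" dials above is consistent with the embed-certs-672176 domain simply being down rather than a routing fault; a few lines below, fix.go indeed reports state=Stopped before restarting it. A quick manual check against libvirt (domain name taken from this run) would be:

        virsh domstate embed-certs-672176
        # expected while the dials fail: shut off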
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
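Before the SSH probe above could succeed, the driver polled libvirt until a DHCP lease for 52:54:00:14:33:cc appeared on mk-embed-certs-672176 and yielded 192.168.72.208. The lease table can be inspected directly (network name taken from this run):

        virsh net-dhcp-leases mk-embed-certs-672176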
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
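
For illustration, a minimal Go sketch of issuing a server certificate carrying the SANs listed in the line above (the node IP, "localhost", "127.0.0.1", "minikube", and the node name). This is not minikube's implementation: the CA here is generated in memory purely to keep the sketch self-contained, whereas the step above signs with the existing ca.pem/ca-key.pem from disk, and error handling is elided for brevity.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Ephemeral CA (the real flow loads the existing minikube CA from disk).
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate template with the SANs from the log line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-672176"}},
		DNSNames:     []string{"localhost", "minikube", "embed-certs-672176"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.72.208"), net.ParseIP("127.0.0.1")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // errors elided for brevity
}
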
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
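
The delta above is computed from the guest's `date +%s.%N` output against the host clock. A rough Go sketch of that comparison follows, under the assumption that a sub-second delta counts as "within tolerance" (the actual threshold is not shown in the log):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the "seconds.nanoseconds" output of `date +%s.%N`
// (e.g. "1701144442.982632661" in the log above) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1701144442.982632661")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	// Assumed tolerance for illustration; the log only reports the delta.
	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < time.Second)
}
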
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
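
The command above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it filters out any stale entry, appends a fresh one, and copies the file back into place. A small, local Go equivalent of that idempotent rewrite (the file path is a placeholder, since writing /etc/hosts requires root; this is a sketch, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "<TAB>host" (mirroring
// the `grep -v $'\t<host>$'` filter above) and appends "ip<TAB>host".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Placeholder path; the log rewrites /etc/hosts on the guest over SSH.
	if err := ensureHostsEntry("hosts.txt", "192.168.72.208", "control-plane.minikube.internal"); err != nil {
		fmt.Println("update failed:", err)
	}
}
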
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
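
The openssl/ln pairs above implement OpenSSL's hashed-directory layout: each trusted certificate in /etc/ssl/certs is reachable through a <subject-hash>.0 symlink. A small Go sketch of the same step, shelling out to openssl for the hash (paths taken from the log; running it for real requires root, so treat it as illustration only):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and points
// <certsDir>/<hash>.0 at the certificate, which is how OpenSSL locates
// trusted CAs at verification time.
func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println("link failed:", err)
	}
}
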
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
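
Each `openssl x509 -noout -checkend 86400` above asks whether a control-plane certificate expires within the next 24 hours. A rough Go equivalent using crypto/x509 (the path is one of the certs checked in the log; this is illustrative, not the tool's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path will have
// expired by now+d, matching the semantics of `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
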
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
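
The 403 -> 500 -> 200 progression above is the expected restart sequence: anonymous probes of /healthz are rejected (403) until the RBAC bootstrap roles that allow unauthenticated reads of /healthz exist, then individual post-start hooks still report failures (500), and finally the endpoint returns a plain "ok". Below is a minimal, illustrative poll loop, not minikube's actual api_server.go; the address is taken from the log, while the timeout and interval are assumptions.

    // healthz_poll.go - illustrative sketch of polling the apiserver /healthz
    // endpoint until it reports 200 OK, mirroring the 403 -> 500 -> 200
    // progression seen in the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint copied from the log; timings are assumptions.
        url := "https://192.168.72.208:8443/healthz"
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Anonymous, unauthenticated probe: skipping cert verification is
            // acceptable only for this throwaway sketch.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                code := resp.StatusCode
                resp.Body.Close()
                fmt.Printf("healthz returned %d\n", code)
                if code == http.StatusOK {
                    return // apiserver is healthy
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }
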
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
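
For context, the 457-byte file copied to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration minikube recommends for the kvm2 + crio combination. The sketch below writes an approximation of such a conflist; the exact JSON minikube generates (subnet, plugin options) may differ, so treat the values as assumptions.

    // write_cni_conflist.go - sketch of the step above: copy a generated bridge
    // CNI config into /etc/cni/net.d/1-k8s.conflist. The JSON is an approximation
    // of a typical bridge+portmap conflist, not the exact file from the log.
    package main

    import "os"

    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "addIf": "true",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }`

    func main() {
        // 0644 so the container runtime can read the config; requires root for this path.
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
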
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
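
The pod_ready.go loop above polls each system-critical pod's PodReady condition until it is True or the 4m0s budget runs out, which is why metrics-server-57f55c9bc5-sbkpc (never Ready) eventually trips the context deadline. A rough client-go re-creation of that loop is sketched below; it is not minikube's implementation, and the kubeconfig path and pod name are taken from the log.

    // pod_ready_sketch.go - illustrative re-creation of the wait loop above:
    // poll a pod until its PodReady condition is True or a timeout expires.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        deadline := time.Now().Add(4 * time.Minute) // the log waits "up to 4m0s"
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
                "metrics-server-57f55c9bc5-sbkpc", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        fmt.Printf("pod has status \"Ready\":%q\n", c.Status)
                        if c.Status == corev1.ConditionTrue {
                            return
                        }
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }
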
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
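
The "config check failed, skipping stale config cleanup" message simply means the four kubeadm-generated kubeconfig files no longer exist (the preceding kubeadm reset removed /etc/kubernetes), so there is nothing stale to clean before the fresh kubeadm init below. An illustrative sketch of that existence check (not the actual kubeadm.go:152 code):

    // stale_config_check_sketch.go - check for the kubeadm-generated kubeconfig
    // files; if they are missing, stale-config cleanup can be skipped.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        missing := 0
        for _, f := range files {
            if _, err := os.Stat(f); err != nil {
                fmt.Printf("cannot access %s: %v\n", f, err)
                missing++
            }
        }
        if missing > 0 {
            fmt.Println("config check failed, skipping stale config cleanup")
        }
    }
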
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
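
If the join command printed above ever needs to be re-derived, the --discovery-token-ca-cert-hash value is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. The sketch below recomputes it on the control-plane host; the certificate path follows the certificateDir reported earlier in the log (/var/lib/minikube/certs) and is otherwise an assumption.

    // cahash.go - recompute the discovery-token-ca-cert-hash printed by kubeadm:
    // sha256 over the DER-encoded Subject Public Key Info of the cluster CA.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // path assumed from the log's certificateDir
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
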
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
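
elevateKubeSystemPrivileges corresponds to the two kubectl commands above: keep retrying "get sa default" until the controller-manager has created the default ServiceAccount, then bind cluster-admin to kube-system:default via the "minikube-rbac" ClusterRoleBinding. A roughly equivalent client-go sketch follows (not minikube's code; kubeconfig path and retry interval are assumptions).

    // elevate_privileges_sketch.go - wait for the default ServiceAccount, then
    // create the minikube-rbac ClusterRoleBinding, as the log's commands do.
    package main

    import (
        "context"
        "fmt"
        "time"

        rbacv1 "k8s.io/api/rbac/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.TODO()

        // Equivalent of the repeated "kubectl get sa default" calls above.
        for {
            if _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }

        // Equivalent of "kubectl create clusterrolebinding minikube-rbac
        // --clusterrole=cluster-admin --serviceaccount=kube-system:default".
        crb := &rbacv1.ClusterRoleBinding{
            ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
            RoleRef: rbacv1.RoleRef{
                APIGroup: "rbac.authorization.k8s.io",
                Kind:     "ClusterRole",
                Name:     "cluster-admin",
            },
            Subjects: []rbacv1.Subject{{
                Kind:      "ServiceAccount",
                Name:      "default",
                Namespace: "kube-system",
            }},
        }
        if _, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("minikube-rbac clusterrolebinding created")
    }
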
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
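	The node_ready.go wait above polls the node object until its Ready condition reports True. A minimal client-go sketch of an equivalent check follows; the kubeconfig path, node name and timeout are taken from the log for illustration only, and this is not minikube's actual implementation.

	// Sketch: wait for a node's Ready condition, similar in spirit to the
	// node_ready.go wait logged above. Illustrative only.
	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    corev1 "k8s.io/api/core/v1"
	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	    node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    if err != nil {
	        return false, err
	    }
	    for _, c := range node.Status.Conditions {
	        if c.Type == corev1.NodeReady {
	            return c.Status == corev1.ConditionTrue, nil
	        }
	    }
	    return false, nil
	}

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	    defer cancel()
	    for {
	        if ok, err := nodeReady(ctx, cs, "embed-certs-672176"); err == nil && ok {
	            fmt.Println("node is Ready")
	            return
	        }
	        select {
	        case <-ctx.Done():
	            fmt.Println("timed out waiting for node Ready")
	            return
	        case <-time.After(2 * time.Second):
	        }
	    }
	}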
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
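	The pipeline completed above edits the CoreDNS ConfigMap so that host.minikube.internal resolves to the host-side gateway (192.168.72.1 on this network). A rough client-go equivalent of that edit is sketched below; minikube itself shells out to kubectl as logged, and the sed script additionally enables the log plugin, which this sketch omits.

	// Sketch: inject a hosts{} stanza for host.minikube.internal into the CoreDNS
	// Corefile, roughly what the logged sed | kubectl replace pipeline achieves.
	package main

	import (
	    "context"
	    "strings"

	    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    "k8s.io/client-go/kubernetes"
	    "k8s.io/client-go/tools/clientcmd"
	)

	func main() {
	    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	    if err != nil {
	        panic(err)
	    }
	    cs, err := kubernetes.NewForConfig(cfg)
	    if err != nil {
	        panic(err)
	    }
	    ctx := context.Background()
	    cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	    if err != nil {
	        panic(err)
	    }
	    hosts := "        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }\n"
	    // Insert the hosts block just before the forward plugin line, as the sed script does.
	    // Assumes the Corefile's forward line starts exactly like this; a real tool would parse it.
	    cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
	        "        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
	    if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
	        panic(err)
	    }
	}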
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
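	The healthz wait above is a plain HTTPS poll of the apiserver's /healthz endpoint until it answers 200 "ok". A minimal sketch follows, using the address from the log; skipping TLS verification is an assumption made only to keep the example short, a real client would trust the cluster CA instead.

	// Sketch: poll https://<apiserver>/healthz until it answers 200.
	package main

	import (
	    "crypto/tls"
	    "fmt"
	    "net/http"
	    "time"
	)

	func main() {
	    client := &http.Client{
	        Timeout:   5 * time.Second,
	        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // demo only
	    }
	    url := "https://192.168.72.208:8443/healthz" // address from the log above
	    for i := 0; i < 30; i++ {
	        resp, err := client.Get(url)
	        if err == nil {
	            resp.Body.Close()
	            if resp.StatusCode == http.StatusOK {
	                fmt.Println("apiserver healthz returned 200")
	                return
	            }
	        }
	        time.Sleep(2 * time.Second)
	    }
	    fmt.Println("apiserver did not become healthy in time")
	}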
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
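	The kubelet check above relies on systemctl is-active --quiet, which exits 0 only when the queried unit is active; minikube runs it over SSH via ssh_runner. Run locally, the same check reduces to the sketch below, mirroring the logged command verbatim.

	// Sketch: check whether the kubelet unit is active, as the logged
	// "sudo systemctl is-active --quiet service kubelet" command does.
	package main

	import (
	    "fmt"
	    "os/exec"
	)

	func main() {
	    cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	    if err := cmd.Run(); err != nil {
	        // A non-zero exit status means the unit is not active.
	        fmt.Println("kubelet is not active:", err)
	        return
	    }
	    fmt.Println("kubelet is active")
	}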
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 04:07:14 UTC, ends at Tue 2023-11-28 04:21:51 UTC. --
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.867353083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145310867336617,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=f3c4c7a0-1ef4-495d-8455-e7d37a70bd50 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.868391571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d3d5fa5d-9ef0-4639-908b-2bee1012f151 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.868443649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d3d5fa5d-9ef0-4639-908b-2bee1012f151 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.868605537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d3d5fa5d-9ef0-4639-908b-2bee1012f151 name=/runtime.v1.RuntimeService/ListContainers
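	The CRI-O journal entries in this section are largely the kubelet polling the CRI RuntimeService (Version, ImageFsInfo, ListContainers), and the long response above is one such unfiltered container listing. For reference, the same ListContainers RPC can be issued directly against the runtime socket roughly as sketched below; the socket path is assumed to be CRI-O's default.

	// Sketch: issue the unfiltered ListContainers RPC that the journal shows
	// being served, against CRI-O's runtime socket.
	package main

	import (
	    "context"
	    "fmt"
	    "time"

	    "google.golang.org/grpc"
	    "google.golang.org/grpc/credentials/insecure"
	    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
	    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock", // assumed default CRI-O socket path
	        grpc.WithTransportCredentials(insecure.NewCredentials()))
	    if err != nil {
	        panic(err)
	    }
	    defer conn.Close()

	    client := runtimeapi.NewRuntimeServiceClient(conn)
	    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    defer cancel()

	    resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	    if err != nil {
	        panic(err)
	    }
	    for _, c := range resp.Containers {
	        fmt.Printf("%s %s %s\n", c.Id[:13], c.Metadata.Name, c.State)
	    }
	}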
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.907273933Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b7f8f995-cb5d-42a0-bbb9-2c8771052a7f name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.907348178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b7f8f995-cb5d-42a0-bbb9-2c8771052a7f name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.908710806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0bba89b0-7ca8-4cde-8dd2-b920c8479900 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.909220911Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145310909204168,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0bba89b0-7ca8-4cde-8dd2-b920c8479900 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.909714948Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a4b07195-1da5-4754-92b6-9d1038d96fe5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.909796333Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a4b07195-1da5-4754-92b6-9d1038d96fe5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.909989253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a4b07195-1da5-4754-92b6-9d1038d96fe5 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.958986661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c7b0e606-3fa8-46b0-be68-c0b37bb25942 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.959135908Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c7b0e606-3fa8-46b0-be68-c0b37bb25942 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.960558479Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=129f66cc-ce50-4beb-928d-55d825701dba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.960944238Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145310960930885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=129f66cc-ce50-4beb-928d-55d825701dba name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.961633085Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=41669655-cafd-4e2c-87cd-78ce1590b5b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.961677649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=41669655-cafd-4e2c-87cd-78ce1590b5b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.961820720Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=41669655-cafd-4e2c-87cd-78ce1590b5b8 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.998765438Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=01378411-8564-4057-95dc-51c089a5068a name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:50 embed-certs-672176 crio[710]: time="2023-11-28 04:21:50.998828270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=01378411-8564-4057-95dc-51c089a5068a name=/runtime.v1.RuntimeService/Version
	Nov 28 04:21:51 embed-certs-672176 crio[710]: time="2023-11-28 04:21:51.000360046Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=33f17873-bca6-41bb-a5e4-0ec81ce43217 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:51 embed-certs-672176 crio[710]: time="2023-11-28 04:21:51.000734568Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145311000723060,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=33f17873-bca6-41bb-a5e4-0ec81ce43217 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:21:51 embed-certs-672176 crio[710]: time="2023-11-28 04:21:51.001414219Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3f79b9ce-4fb2-4a85-9f5a-6cd5e824b51e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:51 embed-certs-672176 crio[710]: time="2023-11-28 04:21:51.001462754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3f79b9ce-4fb2-4a85-9f5a-6cd5e824b51e name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:21:51 embed-certs-672176 crio[710]: time="2023-11-28 04:21:51.001617916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3f79b9ce-4fb2-4a85-9f5a-6cd5e824b51e name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55999b180e46a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   23bae9ebe7579       storage-provisioner
	c4e30da1c07f5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   543a0c27fed3c       kube-proxy-q7srf
	91c92742b6cfd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   dc2637252b729       coredns-5dd5756b68-48xtx
	fc3340b3a65d7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   38cde89d5ac45       etcd-embed-certs-672176
	6169d1fa99a35       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   50ac62d0c2bf7       kube-scheduler-embed-certs-672176
	01cf63ee24331       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   e703dda348955       kube-apiserver-embed-certs-672176
	f81885b2b8dd1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   a332ada34261b       kube-controller-manager-embed-certs-672176
	
	* 
	* ==> coredns [91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39443 - 58819 "HINFO IN 8585624031149724927.8956002308733853960. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029917572s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-672176
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-672176
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=embed-certs-672176
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:12:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-672176
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:21:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:17:56 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:17:56 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:17:56 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:17:56 +0000   Tue, 28 Nov 2023 04:12:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.208
	  Hostname:    embed-certs-672176
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5cf2aae5a434ee495cec6b9bb579e26
	  System UUID:                a5cf2aae-5a43-4ee4-95ce-c6b9bb579e26
	  Boot ID:                    532f93ee-13ec-4e00-80cb-8b2b44b5a139
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-48xtx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-embed-certs-672176                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-embed-certs-672176             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-controller-manager-embed-certs-672176    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 kube-proxy-q7srf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	  kube-system                 kube-scheduler-embed-certs-672176             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m21s
	  kube-system                 metrics-server-57f55c9bc5-ppnxv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m6s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node embed-certs-672176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node embed-certs-672176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node embed-certs-672176 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node embed-certs-672176 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s  kubelet          Node embed-certs-672176 status is now: NodeReady
	  Normal  RegisteredNode           9m10s  node-controller  Node embed-certs-672176 event: Registered Node embed-certs-672176 in Controller
	
	* 
	* ==> dmesg <==
	* [Nov28 04:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069382] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.485292] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.676842] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156986] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.685488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.309157] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.123275] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.165877] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.126326] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.232700] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.873368] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Nov28 04:08] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 04:12] systemd-fstab-generator[3499]: Ignoring "noauto" for root device
	[  +9.804360] systemd-fstab-generator[3825]: Ignoring "noauto" for root device
	[ +12.996425] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.656013] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381] <==
	* {"level":"info","ts":"2023-11-28T04:12:23.497175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db switched to configuration voters=(15608252585131857883)"}
	{"level":"info","ts":"2023-11-28T04:12:23.497316Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"390b9e353a6e0025","local-member-id":"d89ba707b55b57db","added-peer-id":"d89ba707b55b57db","added-peer-peer-urls":["https://192.168.72.208:2380"]}
	{"level":"info","ts":"2023-11-28T04:12:23.506835Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-11-28T04:12:23.507081Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.208:2380"}
	{"level":"info","ts":"2023-11-28T04:12:23.507241Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.208:2380"}
	{"level":"info","ts":"2023-11-28T04:12:23.511209Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d89ba707b55b57db","initial-advertise-peer-urls":["https://192.168.72.208:2380"],"listen-peer-urls":["https://192.168.72.208:2380"],"advertise-client-urls":["https://192.168.72.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:12:23.51342Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:12:23.916762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.916881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.916934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db received MsgPreVoteResp from d89ba707b55b57db at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.917232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db received MsgVoteResp from d89ba707b55b57db at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became leader at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d89ba707b55b57db elected leader d89ba707b55b57db at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.920349Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d89ba707b55b57db","local-member-attributes":"{Name:embed-certs-672176 ClientURLs:[https://192.168.72.208:2379]}","request-path":"/0/members/d89ba707b55b57db/attributes","cluster-id":"390b9e353a6e0025","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:12:23.920664Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.921171Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:12:23.924165Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:12:23.924226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T04:12:23.924279Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"390b9e353a6e0025","local-member-id":"d89ba707b55b57db","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924402Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924443Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:12:23.924814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:12:23.925577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.208:2379"}
	
	* 
	* ==> kernel <==
	*  04:21:51 up 14 min,  0 users,  load average: 0.29, 0.25, 0.19
	Linux embed-certs-672176 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300] <==
	* W1128 04:17:27.389701       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:17:27.389729       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:17:27.389736       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:17:27.389773       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:17:27.389824       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:17:27.391103       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:18:26.240182       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:18:27.390585       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:18:27.390653       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:18:27.390666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:18:27.391831       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:18:27.391976       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:18:27.392133       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:19:26.240209       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 04:20:26.240659       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:20:27.391821       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:20:27.392001       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:20:27.392144       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:20:27.392742       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:20:27.392933       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:20:27.394107       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:21:26.240289       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1] <==
	* I1128 04:16:12.057880       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="101.26µs"
	E1128 04:16:41.460445       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:16:41.951384       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:17:11.467540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:17:11.960669       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:17:41.474231       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:17:41.970204       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:18:11.479834       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:18:11.984479       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:18:41.485305       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:18:41.993457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:18:52.055623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="299.855µs"
	I1128 04:19:06.059258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="288.235µs"
	E1128 04:19:11.490435       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:19:12.002563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:19:41.498192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:19:42.011223       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:20:11.503883       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:20:12.021225       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:20:41.510009       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:20:42.029710       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:21:11.515307       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:21:12.042517       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:21:41.521464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:21:42.051275       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89] <==
	* I1128 04:12:46.803514       1 server_others.go:69] "Using iptables proxy"
	I1128 04:12:46.826148       1 node.go:141] Successfully retrieved node IP: 192.168.72.208
	I1128 04:12:46.885971       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 04:12:46.886067       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 04:12:46.889127       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:12:46.889660       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:12:46.889900       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:12:46.889936       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:12:46.891869       1 config.go:188] "Starting service config controller"
	I1128 04:12:46.892907       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:12:46.893399       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:12:46.893439       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:12:46.895969       1 config.go:315] "Starting node config controller"
	I1128 04:12:46.896089       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:12:46.993905       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:12:46.993906       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:12:46.996213       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173] <==
	* E1128 04:12:26.401724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:12:26.401733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:26.401740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:26.401751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:12:26.401760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:12:26.402392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:12:26.408859       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:12:26.408938       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:12:27.309448       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:12:27.309553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:12:27.328746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.328814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:27.372976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:12:27.373099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 04:12:27.388730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:12:27.388902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:12:27.626879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.626969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:27.635190       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:12:27.635273       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:12:27.697998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:12:27.698296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 04:12:27.723773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.723857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1128 04:12:30.185112       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 04:07:14 UTC, ends at Tue 2023-11-28 04:21:51 UTC. --
	Nov 28 04:19:06 embed-certs-672176 kubelet[3832]: E1128 04:19:06.040489    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:19:19 embed-certs-672176 kubelet[3832]: E1128 04:19:19.038474    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:19:30 embed-certs-672176 kubelet[3832]: E1128 04:19:30.119263    3832 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:19:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:19:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:19:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:19:33 embed-certs-672176 kubelet[3832]: E1128 04:19:33.039621    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:19:48 embed-certs-672176 kubelet[3832]: E1128 04:19:48.039610    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:20:02 embed-certs-672176 kubelet[3832]: E1128 04:20:02.041147    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:20:15 embed-certs-672176 kubelet[3832]: E1128 04:20:15.039140    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:20:29 embed-certs-672176 kubelet[3832]: E1128 04:20:29.039904    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:20:30 embed-certs-672176 kubelet[3832]: E1128 04:20:30.120166    3832 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:20:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:20:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:20:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:20:44 embed-certs-672176 kubelet[3832]: E1128 04:20:44.038781    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:20:58 embed-certs-672176 kubelet[3832]: E1128 04:20:58.039550    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:21:13 embed-certs-672176 kubelet[3832]: E1128 04:21:13.039165    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:21:25 embed-certs-672176 kubelet[3832]: E1128 04:21:25.038926    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:21:30 embed-certs-672176 kubelet[3832]: E1128 04:21:30.121698    3832 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:21:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:21:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:21:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:21:40 embed-certs-672176 kubelet[3832]: E1128 04:21:40.040149    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:21:51 embed-certs-672176 kubelet[3832]: E1128 04:21:51.038917    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	
	* 
	* ==> storage-provisioner [55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac] <==
	* I1128 04:12:46.708764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:12:46.721336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:12:46.721428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:12:46.732630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:12:46.733424       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689!
	I1128 04:12:46.735780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"374715d4-9bc6-4746-ae44-37fdb42dadbd", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689 became leader
	I1128 04:12:46.834587       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-672176 -n embed-certs-672176
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-672176 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ppnxv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv: exit status 1 (63.886747ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ppnxv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.46s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (169s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:14:18.807206  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:14:19.024961  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:15:10.258567  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:16:17.838838  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:16:23.484569  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 04:16:46.724969  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666657 -n old-k8s-version-666657
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:16:50.528990451 +0000 UTC m=+5757.703964589
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-666657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-666657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.383µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-666657 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-666657 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-666657 logs -n 25: (1.416524711s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p newest-cni-644411             | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-222348             | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
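The audit table above wraps each command's flags across several rows. Rejoined, the final start for embed-certs-672176 (04:02 UTC) is equivalent to a single invocation along the lines of the sketch below; the flags are copied from the table rows and the binary path is the one used throughout this report (illustrative only, not re-run):

    # rejoined from the last "start" row of the audit table
    out/minikube-linux-amd64 start -p embed-certs-672176 --memory=2200 \
        --alsologtostderr --wait=true --embed-certs \
        --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.28.4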
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
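The Run: lines above show the exact commands minikube executes over SSH while gathering diagnostics for default-k8s-diff-port-725962. A minimal sketch of repeating them by hand on the same node (assuming the profile is still running and crictl is on the PATH inside the VM):

    # open a shell on the node for this profile
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-725962
    # then, inside the VM:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
    sudo crictl logs --tail 400 <container-id>   # <container-id>: any ID from "crictl ps -a"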
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
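The dial error above means the guest's SSH port was unreachable while the embed-certs-672176 VM was coming back up; minikube retries this internally. A rough manual check of the same condition from the host, assuming netcat and the libvirt client tools are installed (the IP, port, and domain name are taken from the lease and domain logged above):

    # probe the guest's SSH port from the host
    nc -vz 192.168.72.208 22
    # state of the libvirt domain backing the profile
    sudo virsh -c qemu:///system domstate embed-certs-672176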
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
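The healthz poll above can be reproduced from the host with a plain HTTP client; the endpoint is taken from the log line, -k skips verification of the cluster-CA-signed serving certificate, and the request relies on anonymous access to /healthz being enabled (the Kubernetes default). A sketch, not part of the test:

    curl -k https://192.168.61.13:8444/healthz
    # a healthy apiserver answers with the same "ok" body seen in the log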
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
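At this point the host kubeconfig's current context points at the profile, so the state the test subsequently waits on (for example the pending metrics-server pod listed above) can be inspected directly with ordinary kubectl; a minimal sketch:

    kubectl --context default-k8s-diff-port-725962 get nodes
    kubectl --context default-k8s-diff-port-725962 get pods -n kube-system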
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
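
The retry.go lines traced through the run above follow one readiness loop: list the kube-system pods, report which control-plane components are still missing, then sleep for a progressively longer interval before trying again, until everything reports Running. The Go sketch below only illustrates that poll-with-growing-backoff shape; the function names, initial delay, growth factor, and cap are hypothetical and do not reproduce minikube's retry package.

    package main

    import (
        "fmt"
        "time"
    )

    // pollWithBackoff keeps calling check until it reports done or total elapses,
    // stretching the wait between attempts (the "will retry after ..." lines above).
    func pollWithBackoff(total time.Duration, check func() (done bool, missing []string)) error {
        deadline := time.Now().Add(total)
        wait := 300 * time.Millisecond // assumed initial delay
        for time.Now().Before(deadline) {
            done, missing := check()
            if done {
                return nil
            }
            fmt.Printf("will retry after %s: missing components: %v\n", wait, missing)
            time.Sleep(wait)
            wait = wait * 3 / 2 // grow roughly 1.5x per attempt (assumed factor)
            if wait > 15*time.Second {
                wait = 15 * time.Second // cap the delay (assumed cap)
            }
        }
        return fmt.Errorf("components still missing after %s", total)
    }

    func main() {
        attempts := 0
        err := pollWithBackoff(2*time.Minute, func() (bool, []string) {
            attempts++
            if attempts < 5 { // stand-in for actually listing kube-system pods
                return false, []string{"etcd", "kube-apiserver"}
            }
            return true, nil
        })
        fmt.Println("result:", err)
    }
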
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
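
The ClusterConfig dump above is the saved profile state this restart works from; minikube persists it as JSON under the profile directory. A small Go sketch (assuming the default ~/.minikube layout; not part of the test suite) that reads a couple of the fields shown above back out of that file:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Assumed default location of the saved profile config for this run's profile name.
	path := filepath.Join(os.Getenv("HOME"), ".minikube", "profiles", "embed-certs-672176", "config.json")
	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var cfg map[string]any
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}
	// Field names match the StartCluster dump above.
	k8s, _ := cfg["KubernetesConfig"].(map[string]any)
	fmt.Println("Name:", cfg["Name"])
	fmt.Println("KubernetesVersion:", k8s["KubernetesVersion"])
	fmt.Println("ContainerRuntime:", k8s["ContainerRuntime"])
}
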
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
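
The healthz probe sequence above is typical of a control-plane restart: first "connection refused" while the apiserver static pod comes back, then anonymous 403s until RBAC is bootstrapped, then 500s while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally a plain 200 "ok". A minimal standalone Go sketch of that kind of probe loop (an illustration, not the test suite's api_server.go; the endpoint is the one from this run) could look like:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from this log; adjust for another profile.
	url := "https://192.168.72.208:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver presents a self-signed cert, so skip verification for this ad-hoc probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the static pod restarts
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
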
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
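
The extra 4m wait above expired solely on metrics-server-57f55c9bc5-sbkpc, which never reported Ready; presumably this is because the ClusterConfig earlier in this log points the metrics-server addon at the stub registry fake.domain (CustomAddonRegistries), so its image cannot be pulled in this environment. A minimal client-go sketch of the same kind of readiness check (a hypothetical standalone helper, not minikube's pod_ready implementation) could look like:

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the kubeconfig written by "minikube start" at the default location.
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Assumed label used by the metrics-server addon's pods.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s Ready=%v\n", pod.Name, ready)
	}
}
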
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
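
The conflist itself is not echoed above (only its 457-byte size is logged); a representative bridge CNI config of the kind this step writes looks like the sketch below. The field values are illustrative assumptions, not the exact file from this run.

    # Illustrative only: a minimal bridge + portmap conflist in the style of
    # /etc/cni/net.d/1-k8s.conflist; the subnet and plugin options are assumptions.
    sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": { "portMappings": true }
        }
      ]
    }
    EOF
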
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
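
The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a log directive ahead of the errors line and a hosts block ahead of the forward plugin. Reconstructed from that sed expression (not dumped from the cluster), the rewritten server block can be inspected like this:

    # Show the rewritten Corefile; the hosts block in the comment is what the
    # sed expression inserts (a reconstruction, not a capture from this run):
    #
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.72.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf
    kubectl --context embed-certs-672176 -n kube-system \
      get configmap coredns -o jsonpath='{.data.Corefile}'
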
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
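
With the three addons reported enabled, a quick way to confirm their objects landed in the cluster (an illustrative check, not part of the test flow) is:

    # Hedged verification of the enabled addons; resource names follow the
    # manifests applied above (metrics-server Deployment, storage-provisioner
    # Pod, and the default StorageClass).
    kubectl --context embed-certs-672176 -n kube-system get deployment metrics-server
    kubectl --context embed-certs-672176 -n kube-system get pod storage-provisioner
    kubectl --context embed-certs-672176 get storageclass
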
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
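
The health probe above hits /healthz on https://192.168.72.208:8443 directly; an equivalent manual check can be run through kubectl's authenticated raw client (illustrative, not what the test executes):

    # Ask the apiserver for its health and readiness endpoints.
    kubectl --context embed-certs-672176 get --raw='/healthz'          # expect: ok
    kubectl --context embed-certs-672176 get --raw='/readyz?verbose'   # per-check breakdown
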
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
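
At this point the kubeconfig written earlier carries the new context as the default; a minimal smoke check against the freshly started cluster (illustrative) would be:

    # Confirm the active context and that the node registered as Ready,
    # using the kubeconfig path recorded in the log above.
    export KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
    kubectl config current-context   # expected: embed-certs-672176
    kubectl get nodes -o wide
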
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 03:57:45 UTC, ends at Tue 2023-11-28 04:16:51 UTC. --
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.378204868Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145011378192054,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=163491d3-bb10-4bf5-b3af-a3135c226587 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.378760289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=41d48052-e55d-42d1-8049-570d79f2a590 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.378843213Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=41d48052-e55d-42d1-8049-570d79f2a590 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.379100558Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=41d48052-e55d-42d1-8049-570d79f2a590 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.426246254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bf7f9927-087c-4a84-a930-49f57c1e32b1 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.426302430Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bf7f9927-087c-4a84-a930-49f57c1e32b1 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.427765248Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3705a43c-9380-4af7-8e06-71928f9359a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.428207167Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145011428190417,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3705a43c-9380-4af7-8e06-71928f9359a1 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.428756590Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=47d84e5b-0e90-4ffd-b888-9591c6ab2ba1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.428821007Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=47d84e5b-0e90-4ffd-b888-9591c6ab2ba1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.428988507Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=47d84e5b-0e90-4ffd-b888-9591c6ab2ba1 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.469319394Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d692f2ed-a1cd-44d2-80be-9ee6f43c0ddc name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.469420222Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d692f2ed-a1cd-44d2-80be-9ee6f43c0ddc name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.470803569Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=546b58e3-4a3a-417c-ab1a-6680f0f4b87c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.471179789Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145011471166603,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=546b58e3-4a3a-417c-ab1a-6680f0f4b87c name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.471663212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d3fb74bd-8b2c-4230-b5e1-af85324fc918 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.471793892Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d3fb74bd-8b2c-4230-b5e1-af85324fc918 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.471985771Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d3fb74bd-8b2c-4230-b5e1-af85324fc918 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.518358254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e0fde2e8-2977-41f2-bef1-6475186596c3 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.518438232Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e0fde2e8-2977-41f2-bef1-6475186596c3 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.521175424Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=3c8680f3-dd7c-4a57-8ad6-6f0c66aa6207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.521634587Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145011521619347,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:115433,},InodesUsed:&UInt64Value{Value:65,},},},}" file="go-grpc-middleware/chain.go:25" id=3c8680f3-dd7c-4a57-8ad6-6f0c66aa6207 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.522321796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=14645db0-cfb1-4449-aec4-fbf353fb5886 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.522370207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=14645db0-cfb1-4449-aec4-fbf353fb5886 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:16:51 old-k8s-version-666657 crio[716]: time="2023-11-28 04:16:51.522545077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4,PodSandboxId:fa01086d74baa278002b4dec633701f19824de6c6610d907804f6d53f45b8e89,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144229479578775,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ed59bc28-66f5-44f8-9ff5-d5be69e0049a,},Annotations:map[string]string{io.kubernetes.container.hash: 8e76b141,io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159,PodSandboxId:b1d5aa0a1633946ba10561b5f4b9861d92fe511d6e02d29bf4070797edb47cf6,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1701144228819892268,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fpjnf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62ef95f3-b9bc-4936-a2e7-398191b6bed5,},Annotations:map[string]string{io.kubernetes.container.hash: c6c5f81f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20,PodSandboxId:a7bec5579a274c5d95675941872fa6da07dc3b739bd82cf2f2481c34572f66d3,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1701144227950257258,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-529cg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1c07d1ac-6461-451e-a1bf-4a5493d7d453,},Annotations:map[string]string{io.kubernetes.container.hash: d8dad01b,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"contai
nerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3,PodSandboxId:fc9ca2bef594fb9c1142d04fbcd8bbdd0d73cddf51be4f963827638d789c6ce2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1701144201610291676,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e35cc95d33d1e82251c247e4c3039876,},Annotations:map[s
tring]string{io.kubernetes.container.hash: f6164f8e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a,PodSandboxId:2b639245676bdbaba2d743d027ab2c10c93fa3fc7e7e253cd4d83441e758f2e7,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1701144200561582080,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7376ddb4f190a0ded9394063437
bcb4e,},Annotations:map[string]string{io.kubernetes.container.hash: 8f61a3f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2,PodSandboxId:4406cf30e5698951c86d82a2d13e97a26ed67affd0738799478173ca906394ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1701144200605866435,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Ann
otations:map[string]string{io.kubernetes.container.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1701144199840191427,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:ma
p[string]string{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4,PodSandboxId:6ac5768cbf19e4cda76203672f4cfd11ede8fc90d4237a11431e3389261205bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_EXITED,CreatedAt:1701143896787387489,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-666657,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b2202267222584f9d33fefa0997a4eab,},Annotations:map[string]str
ing{io.kubernetes.container.hash: 21a40406,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=14645db0-cfb1-4449-aec4-fbf353fb5886 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ecbe1433454e5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   fa01086d74baa       storage-provisioner
	a1a36dd35c0d6       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384   13 minutes ago      Running             kube-proxy                0                   b1d5aa0a16339       kube-proxy-fpjnf
	bf61f1f828a44       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b   13 minutes ago      Running             coredns                   0                   a7bec5579a274       coredns-5644d7b6d9-529cg
	108096398e441       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed   13 minutes ago      Running             etcd                      0                   fc9ca2bef594f       etcd-old-k8s-version-666657
	3fba9d2d49ee6       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a   13 minutes ago      Running             kube-scheduler            0                   4406cf30e5698       kube-scheduler-old-k8s-version-666657
	731933b8d59f9       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d   13 minutes ago      Running             kube-controller-manager   0                   2b639245676bd       kube-controller-manager-old-k8s-version-666657
	eb7bc23ae3bb9       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   13 minutes ago      Running             kube-apiserver            1                   6ac5768cbf19e       kube-apiserver-old-k8s-version-666657
	d92a27c0ce264       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e   18 minutes ago      Exited              kube-apiserver            0                   6ac5768cbf19e       kube-apiserver-old-k8s-version-666657
	
	* 
	* ==> coredns [bf61f1f828a44c23c4e5f82409576bf12884717baaec81b789ae3f719e5fec20] <==
	* .:53
	2023-11-28T04:03:48.303Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
	2023-11-28T04:03:48.303Z [INFO] CoreDNS-1.6.2
	2023-11-28T04:03:48.303Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	[INFO] Reloading
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-666657
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-666657
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=old-k8s-version-666657
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:03:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:16:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:16:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:16:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:16:26 +0000   Tue, 28 Nov 2023 04:03:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    old-k8s-version-666657
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 0539a6bc6c654b8fa43b48e960f31234
	 System UUID:                0539a6bc-6c65-4b8f-a43b-48e960f31234
	 Boot ID:                    c7565d7d-520e-4ee6-b523-8de18c606738
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (8 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-5644d7b6d9-529cg                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                etcd-old-k8s-version-666657                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-apiserver-old-k8s-version-666657             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-controller-manager-old-k8s-version-666657    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                kube-proxy-fpjnf                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                kube-scheduler-old-k8s-version-666657             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                metrics-server-74d5856cc6-wlfq5                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet, old-k8s-version-666657     Node old-k8s-version-666657 status is now: NodeHasSufficientPID
	  Normal  Starting                 13m                kube-proxy, old-k8s-version-666657  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Nov28 03:57] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069990] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.757104] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.365222] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154274] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.624689] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.901266] systemd-fstab-generator[643]: Ignoring "noauto" for root device
	[  +0.119101] systemd-fstab-generator[654]: Ignoring "noauto" for root device
	[  +0.224542] systemd-fstab-generator[667]: Ignoring "noauto" for root device
	[  +0.152090] systemd-fstab-generator[678]: Ignoring "noauto" for root device
	[  +0.263743] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Nov28 03:58] systemd-fstab-generator[1033]: Ignoring "noauto" for root device
	[  +0.423782] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +24.918472] kauditd_printk_skb: 13 callbacks suppressed
	[Nov28 03:59] kauditd_printk_skb: 4 callbacks suppressed
	[Nov28 04:03] systemd-fstab-generator[3102]: Ignoring "noauto" for root device
	[  +1.171577] kauditd_printk_skb: 6 callbacks suppressed
	[ +34.319451] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [108096398e4416dcf3dea8ffc0057a1399647e61d177533c5ddf4b01ae3b4ed3] <==
	* 2023-11-28 04:03:21.759200 I | raft: newRaft 856b77cd5251110c [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
	2023-11-28 04:03:21.759204 I | raft: 856b77cd5251110c became follower at term 1
	2023-11-28 04:03:21.766569 W | auth: simple token is not cryptographically signed
	2023-11-28 04:03:21.771452 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2023-11-28 04:03:21.772839 I | etcdserver: 856b77cd5251110c as single-node; fast-forwarding 9 ticks (election ticks 10)
	2023-11-28 04:03:21.773299 I | etcdserver/membership: added member 856b77cd5251110c [https://192.168.50.7:2380] to cluster b162f841703ff885
	2023-11-28 04:03:21.773633 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-11-28 04:03:21.773818 I | embed: listening for metrics on http://192.168.50.7:2381
	2023-11-28 04:03:21.773986 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-11-28 04:03:22.259888 I | raft: 856b77cd5251110c is starting a new election at term 1
	2023-11-28 04:03:22.260086 I | raft: 856b77cd5251110c became candidate at term 2
	2023-11-28 04:03:22.260210 I | raft: 856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 2
	2023-11-28 04:03:22.260240 I | raft: 856b77cd5251110c became leader at term 2
	2023-11-28 04:03:22.260334 I | raft: raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 2
	2023-11-28 04:03:22.260928 I | etcdserver: published {Name:old-k8s-version-666657 ClientURLs:[https://192.168.50.7:2379]} to cluster b162f841703ff885
	2023-11-28 04:03:22.261118 I | embed: ready to serve client requests
	2023-11-28 04:03:22.261140 I | embed: ready to serve client requests
	2023-11-28 04:03:22.262342 I | embed: serving client requests on 127.0.0.1:2379
	2023-11-28 04:03:22.262400 I | embed: serving client requests on 192.168.50.7:2379
	2023-11-28 04:03:22.262482 I | etcdserver: setting up the initial cluster version to 3.3
	2023-11-28 04:03:22.263533 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-11-28 04:03:22.263649 I | etcdserver/api: enabled capabilities for version 3.3
	2023-11-28 04:03:47.881360 W | etcdserver: read-only range request "key:\"/registry/deployments/kube-system/metrics-server\" " with result "range_response_count:0 size:5" took too long (296.317166ms) to execute
	2023-11-28 04:13:23.038054 I | mvcc: store.index: compact 661
	2023-11-28 04:13:23.040618 I | mvcc: finished scheduled compaction at 661 (took 2.008371ms)
	
	* 
	* ==> kernel <==
	*  04:16:51 up 19 min,  0 users,  load average: 0.13, 0.21, 0.21
	Linux old-k8s-version-666657 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [d92a27c0ce26421760d35ed955d182ad61fa04f534a73d5900d9d04d95af39a4] <==
	* W1128 04:03:16.402852       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.402894       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.402969       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403006       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403085       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403792       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403795       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403819       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403838       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403890       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403955       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404016       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404073       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404127       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404155       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404240       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404267       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404292       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404325       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404353       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.404414       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403858       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:16.403874       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:17.688285       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	W1128 04:03:17.696531       1 clientconn.go:1120] grpc: addrConn.createTransport failed to connect to {https://127.0.0.1:2379 0  <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	
	* 
	* ==> kube-apiserver [eb7bc23ae3bb9f8c73ebeaeb030df1c2c98b27acdf5ffb7c293a8d16cdc386d0] <==
	* I1128 04:09:27.250495       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:09:27.250848       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:09:27.250944       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:09:27.251017       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:11:27.251383       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:11:27.251532       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:11:27.251610       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:11:27.251617       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:13:27.253530       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:13:27.254049       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:13:27.254223       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:13:27.254266       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:14:27.254797       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:14:27.254915       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:14:27.254979       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:14:27.255001       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:16:27.255491       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W1128 04:16:27.255982       1 handler_proxy.go:99] no RequestInfo found in the context
	E1128 04:16:27.256114       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:16:27.256194       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [731933b8d59f91b64564a03132ad4f64897116ded4b3ce17c719c8f3d315fb0a] <==
	* E1128 04:10:19.823511       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:10:42.616435       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:10:50.075993       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:11:14.618764       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:11:20.328246       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:11:46.620900       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:11:50.580520       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:12:18.623148       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:12:20.832572       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:12:50.625003       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:12:51.084628       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E1128 04:13:21.336566       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:13:22.627077       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:13:51.588876       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:13:54.629002       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:14:21.840824       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:14:26.631151       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:14:52.093296       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:14:58.633285       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:15:22.345559       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:15:30.635844       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:15:52.598617       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:16:02.637886       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E1128 04:16:22.851978       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W1128 04:16:34.640104       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [a1a36dd35c0d679bc4feae2b55f722fb9e5d94222ccdfc64f0534bbded07a159] <==
	* W1128 04:03:49.179393       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I1128 04:03:49.189741       1 node.go:135] Successfully retrieved node IP: 192.168.50.7
	I1128 04:03:49.189832       1 server_others.go:149] Using iptables Proxier.
	I1128 04:03:49.191007       1 server.go:529] Version: v1.16.0
	I1128 04:03:49.194972       1 config.go:131] Starting endpoints config controller
	I1128 04:03:49.195102       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I1128 04:03:49.196030       1 config.go:313] Starting service config controller
	I1128 04:03:49.196098       1 shared_informer.go:197] Waiting for caches to sync for service config
	I1128 04:03:49.299135       1 shared_informer.go:204] Caches are synced for endpoints config 
	I1128 04:03:49.299403       1 shared_informer.go:204] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [3fba9d2d49ee66313b5cc9a6e11f4cc83069cb4e66b9f45340c6a05df4ea1ef2] <==
	* W1128 04:03:26.247819       1 authentication.go:79] Authentication is disabled
	I1128 04:03:26.247952       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I1128 04:03:26.249523       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	E1128 04:03:26.307045       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:03:26.307185       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:03:26.307288       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:03:26.307356       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:03:26.311269       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:03:26.311363       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:03:26.311409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:03:26.311455       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:03:26.315409       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:26.316940       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:03:26.316960       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:27.311079       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:03:27.317262       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1128 04:03:27.317398       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1128 04:03:27.318284       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:03:27.319837       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:03:27.320362       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:03:27.321609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1128 04:03:27.325998       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1128 04:03:27.327193       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1128 04:03:27.329373       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:03:27.330609       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 03:57:45 UTC, ends at Tue 2023-11-28 04:16:52 UTC. --
	Nov 28 04:12:33 old-k8s-version-666657 kubelet[3121]: E1128 04:12:33.352519    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:12:47 old-k8s-version-666657 kubelet[3121]: E1128 04:12:47.353946    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:01 old-k8s-version-666657 kubelet[3121]: E1128 04:13:01.352786    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:12 old-k8s-version-666657 kubelet[3121]: E1128 04:13:12.352355    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:19 old-k8s-version-666657 kubelet[3121]: E1128 04:13:19.436620    3121 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Nov 28 04:13:27 old-k8s-version-666657 kubelet[3121]: E1128 04:13:27.352782    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:40 old-k8s-version-666657 kubelet[3121]: E1128 04:13:40.354864    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:13:52 old-k8s-version-666657 kubelet[3121]: E1128 04:13:52.353067    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:14:03 old-k8s-version-666657 kubelet[3121]: E1128 04:14:03.354303    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:14:18 old-k8s-version-666657 kubelet[3121]: E1128 04:14:18.352496    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:14:31 old-k8s-version-666657 kubelet[3121]: E1128 04:14:31.353198    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:14:44 old-k8s-version-666657 kubelet[3121]: E1128 04:14:44.397341    3121 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:14:44 old-k8s-version-666657 kubelet[3121]: E1128 04:14:44.397483    3121 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:14:44 old-k8s-version-666657 kubelet[3121]: E1128 04:14:44.397566    3121 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Nov 28 04:14:44 old-k8s-version-666657 kubelet[3121]: E1128 04:14:44.397614    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Nov 28 04:14:59 old-k8s-version-666657 kubelet[3121]: E1128 04:14:59.353170    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:15:10 old-k8s-version-666657 kubelet[3121]: E1128 04:15:10.352341    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:15:23 old-k8s-version-666657 kubelet[3121]: E1128 04:15:23.352463    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:15:34 old-k8s-version-666657 kubelet[3121]: E1128 04:15:34.352580    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:15:47 old-k8s-version-666657 kubelet[3121]: E1128 04:15:47.353203    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:16:02 old-k8s-version-666657 kubelet[3121]: E1128 04:16:02.352660    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:16:13 old-k8s-version-666657 kubelet[3121]: E1128 04:16:13.352167    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:16:25 old-k8s-version-666657 kubelet[3121]: E1128 04:16:25.352309    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:16:37 old-k8s-version-666657 kubelet[3121]: E1128 04:16:37.352246    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Nov 28 04:16:49 old-k8s-version-666657 kubelet[3121]: E1128 04:16:49.352608    3121 pod_workers.go:191] Error syncing pod 64cff3b8-b297-425e-91bc-26e7ca091bfc ("metrics-server-74d5856cc6-wlfq5_kube-system(64cff3b8-b297-425e-91bc-26e7ca091bfc)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [ecbe1433454e572685cd5dc66e924030a471daa9cc12657a01ee105e3400bfb4] <==
	* I1128 04:03:49.777894       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:03:49.788577       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:03:49.788788       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:03:49.797356       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:03:49.798045       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"56d67361-7fdc-4ab6-9363-0dc1d8dccb58", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e became leader
	I1128 04:03:49.798111       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e!
	I1128 04:03:49.898206       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-666657_04924d75-d25d-4ae8-ac80-12122f51609e!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-666657 -n old-k8s-version-666657
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-666657 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-wlfq5
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5: exit status 1 (67.044178ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-wlfq5" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-666657 describe pod metrics-server-74d5856cc6-wlfq5: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (169.00s)
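The kubelet entries above show why the only non-running pod is metrics-server: its image is pinned to the unreachable fake.domain registry (fake.domain/registry.k8s.io/echoserver:1.4), so every pull ends in ErrImagePull/ImagePullBackOff. A minimal manual check of that state, sketched here with kubectl and assuming the old-k8s-version-666657 profile were still running and that the addon uses its usual k8s-app=metrics-server label (an assumption, not shown in the log), would be:

	# list the metrics-server pod and read back the image it is configured to pull
	kubectl --context old-k8s-version-666657 -n kube-system get pods -l k8s-app=metrics-server -o wide
	kubectl --context old-k8s-version-666657 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'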

x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (329.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1128 04:21:58.569327  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:22:05.903299  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 04:22:21.190184  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/no-preload-222348/client.crt: no such file or directory
E1128 04:22:21.854284  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:22:22.071647  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:22:25.136431  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/default-k8s-diff-port-725962/client.crt: no such file or directory
E1128 04:22:55.195936  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:23:13.304820  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:23:34.223295  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 04:23:43.673899  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 04:24:06.690428  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/old-k8s-version-666657/client.crt: no such file or directory
E1128 04:24:18.807597  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 04:24:19.024992  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 04:24:20.886858  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:24:26.533032  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 04:24:34.376689  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/old-k8s-version-666657/client.crt: no such file or directory
E1128 04:24:37.345330  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/no-preload-222348/client.crt: no such file or directory
E1128 04:24:41.292840  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/default-k8s-diff-port-725962/client.crt: no such file or directory
E1128 04:25:01.614924  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:25:05.031155  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/no-preload-222348/client.crt: no such file or directory
E1128 04:25:08.949271  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 04:25:08.977523  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/default-k8s-diff-port-725962/client.crt: no such file or directory
E1128 04:25:10.257643  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 04:25:58.240263  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 04:26:17.838881  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 04:26:23.484257  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 04:26:58.568978  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:27:05.903425  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-672176 -n embed-certs-672176
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-11-28 04:27:19.568959411 +0000 UTC m=+6386.743933559
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-672176 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-672176 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.196µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-672176 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
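The wait at start_stop_delete_test.go:287 and the image assertion at start_stop_delete_test.go:297 boil down to two checks: the kubernetes-dashboard pods must come up, and the dashboard-metrics-scraper deployment must reference the overridden image passed via --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the "addons enable dashboard" entries in the Audit table below). An equivalent manual check, sketched here with kubectl and assuming the embed-certs-672176 profile is reachable, would be roughly:

	# the pods the test waits up to 9m for
	kubectl --context embed-certs-672176 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# the deployment image the test expects to contain registry.k8s.io/echoserver:1.4
	kubectl --context embed-certs-672176 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

In this run the pods never appeared within the 9m window, so both the describe and the image check failed once the context deadline was exhausted.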
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-672176 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-672176 logs -n 25: (1.219939924s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-725962  | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC | 28 Nov 23 03:49 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:49 UTC |                     |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-666657             | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC | 28 Nov 23 04:05 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-644411                  | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:51 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-644411 --memory=2200 --alsologtostderr   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 03:57 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-222348                  | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-725962       | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.0                      |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 03:52 UTC | 28 Nov 23 04:02 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-644411 sudo                              | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | crictl images -o json                                  |                              |         |         |                     |                     |
	| pause   | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| unpause | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | --alsologtostderr -v=1                                 |                              |         |         |                     |                     |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p newest-cni-644411                                   | newest-cni-644411            | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	| delete  | -p                                                     | disable-driver-mounts-846967 | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:57 UTC |
	|         | disable-driver-mounts-846967                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:57 UTC | 28 Nov 23 03:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-672176            | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC | 28 Nov 23 03:59 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 03:59 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-672176                 | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-672176                                  | embed-certs-672176           | jenkins | v1.32.0 | 28 Nov 23 04:02 UTC | 28 Nov 23 04:12 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-666657                              | old-k8s-version-666657       | jenkins | v1.32.0 | 28 Nov 23 04:16 UTC | 28 Nov 23 04:16 UTC |
	| delete  | -p no-preload-222348                                   | no-preload-222348            | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	| delete  | -p                                                     | default-k8s-diff-port-725962 | jenkins | v1.32.0 | 28 Nov 23 04:17 UTC | 28 Nov 23 04:17 UTC |
	|         | default-k8s-diff-port-725962                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 04:02:20
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 04:02:20.007599  388252 out.go:296] Setting OutFile to fd 1 ...
	I1128 04:02:20.007767  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.007777  388252 out.go:309] Setting ErrFile to fd 2...
	I1128 04:02:20.007785  388252 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 04:02:20.008096  388252 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 04:02:20.008843  388252 out.go:303] Setting JSON to false
	I1128 04:02:20.010310  388252 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9890,"bootTime":1701134250,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 04:02:20.010407  388252 start.go:138] virtualization: kvm guest
	I1128 04:02:20.013087  388252 out.go:177] * [embed-certs-672176] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 04:02:20.014598  388252 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 04:02:20.014660  388252 notify.go:220] Checking for updates...
	I1128 04:02:20.015986  388252 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 04:02:20.017211  388252 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:20.018519  388252 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 04:02:20.019955  388252 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 04:02:20.021210  388252 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 04:02:20.023191  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:02:20.023899  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.023964  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.042617  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36861
	I1128 04:02:20.043095  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.043705  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.043736  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.044107  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.044324  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.044601  388252 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 04:02:20.044913  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.044954  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.060572  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34541
	I1128 04:02:20.061089  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.061641  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.061662  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.062005  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.062271  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.099905  388252 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 04:02:20.101319  388252 start.go:298] selected driver: kvm2
	I1128 04:02:20.101341  388252 start.go:902] validating driver "kvm2" against &{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDi
sks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.101493  388252 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 04:02:20.102582  388252 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.102689  388252 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 04:02:20.119550  388252 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 04:02:20.120061  388252 start_flags.go:931] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1128 04:02:20.120161  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:02:20.120182  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:20.120200  388252 start_flags.go:323] config:
	{Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikub
e-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:02:20.120453  388252 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 04:02:20.122000  388252 out.go:177] * Starting control plane node embed-certs-672176 in cluster embed-certs-672176
	I1128 04:02:20.123169  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:02:20.123226  388252 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 04:02:20.123238  388252 cache.go:56] Caching tarball of preloaded images
	I1128 04:02:20.123336  388252 preload.go:174] Found /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1128 04:02:20.123349  388252 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I1128 04:02:20.123483  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:02:20.123764  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:02:20.123841  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 53.317µs
	I1128 04:02:20.123861  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:02:20.123898  388252 fix.go:54] fixHost starting: 
	I1128 04:02:20.124308  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:20.124355  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:20.139372  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35505
	I1128 04:02:20.139973  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:20.140502  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:02:20.140524  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:20.141047  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:20.141273  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.141507  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:02:20.143177  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Running err=<nil>
	W1128 04:02:20.143200  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:02:20.144930  388252 out.go:177] * Updating the running kvm2 "embed-certs-672176" VM ...
	I1128 04:02:17.125019  385277 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:17.142364  385277 api_server.go:72] duration metric: took 4m14.849353437s to wait for apiserver process to appear ...
	I1128 04:02:17.142392  385277 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:17.142425  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:17.142480  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:17.183951  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:17.183975  385277 cri.go:89] found id: ""
	I1128 04:02:17.183984  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:17.184035  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.188897  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:17.188968  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:17.224077  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:17.224105  385277 cri.go:89] found id: ""
	I1128 04:02:17.224115  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:17.224171  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.228613  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:17.228693  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:17.263866  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:17.263895  385277 cri.go:89] found id: ""
	I1128 04:02:17.263906  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:17.263973  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.268122  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:17.268187  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:17.311145  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.311176  385277 cri.go:89] found id: ""
	I1128 04:02:17.311185  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:17.311245  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.315277  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:17.315355  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:17.352737  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:17.352763  385277 cri.go:89] found id: ""
	I1128 04:02:17.352773  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:17.352839  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.357033  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:17.357117  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:17.394844  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:17.394880  385277 cri.go:89] found id: ""
	I1128 04:02:17.394892  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:17.394949  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.399309  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:17.399382  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:17.441719  385277 cri.go:89] found id: ""
	I1128 04:02:17.441755  385277 logs.go:284] 0 containers: []
	W1128 04:02:17.441763  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:17.441769  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:17.441821  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:17.485353  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:17.485378  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:17.485383  385277 cri.go:89] found id: ""
	I1128 04:02:17.485391  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:17.485445  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.489781  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:17.493710  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:17.493734  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:17.552558  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:17.552596  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:17.570454  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:17.570484  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:17.617817  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:17.617855  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:18.071032  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:18.071076  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:18.188437  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:18.188477  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:18.246729  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:18.246777  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:18.287299  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:18.287345  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:18.324855  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:18.324903  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:18.378328  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:18.378370  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:18.421332  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:18.421375  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:18.467856  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:18.467905  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:18.528763  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:18.528817  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
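The "Gathering logs for ..." pass above boils down to a fixed set of commands run on the node over SSH; reconstructed from the Run: entries it is roughly:

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo journalctl -u crio -n 400
    sudo /usr/bin/crictl logs --tail 400 <container-id>   # one call per container id found via crictl ps
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a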
	I1128 04:02:19.035039  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:21.037085  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:23.535684  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:20.146477  388252 machine.go:88] provisioning docker machine ...
	I1128 04:02:20.146512  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:02:20.146758  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.146926  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:02:20.146949  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:02:20.147164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:02:20.150346  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.150885  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 04:58:10 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:02:20.150920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:02:20.151194  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:02:20.151404  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:02:20.151768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:02:20.151998  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:02:20.152482  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:02:20.152501  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:02:23.005224  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:21.087291  385277 api_server.go:253] Checking apiserver healthz at https://192.168.61.13:8444/healthz ...
	I1128 04:02:21.094451  385277 api_server.go:279] https://192.168.61.13:8444/healthz returned 200:
	ok
	I1128 04:02:21.096308  385277 api_server.go:141] control plane version: v1.28.4
	I1128 04:02:21.096333  385277 api_server.go:131] duration metric: took 3.953933505s to wait for apiserver health ...
	I1128 04:02:21.096343  385277 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:21.096371  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1128 04:02:21.096431  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1128 04:02:21.144869  385277 cri.go:89] found id: "d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:21.144908  385277 cri.go:89] found id: ""
	I1128 04:02:21.144920  385277 logs.go:284] 1 containers: [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be]
	I1128 04:02:21.144987  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.149714  385277 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1128 04:02:21.149790  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1128 04:02:21.192196  385277 cri.go:89] found id: "39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.192230  385277 cri.go:89] found id: ""
	I1128 04:02:21.192242  385277 logs.go:284] 1 containers: [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58]
	I1128 04:02:21.192307  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.196964  385277 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1128 04:02:21.197040  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1128 04:02:21.234749  385277 cri.go:89] found id: "4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:21.234775  385277 cri.go:89] found id: ""
	I1128 04:02:21.234785  385277 logs.go:284] 1 containers: [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371]
	I1128 04:02:21.234845  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.239486  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1128 04:02:21.239574  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1128 04:02:21.275950  385277 cri.go:89] found id: "09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.275980  385277 cri.go:89] found id: ""
	I1128 04:02:21.275991  385277 logs.go:284] 1 containers: [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe]
	I1128 04:02:21.276069  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.280518  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1128 04:02:21.280591  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1128 04:02:21.325941  385277 cri.go:89] found id: "3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:21.325967  385277 cri.go:89] found id: ""
	I1128 04:02:21.325977  385277 logs.go:284] 1 containers: [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7]
	I1128 04:02:21.326038  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.330959  385277 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1128 04:02:21.331031  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1128 04:02:21.376605  385277 cri.go:89] found id: "59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.376636  385277 cri.go:89] found id: ""
	I1128 04:02:21.376648  385277 logs.go:284] 1 containers: [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639]
	I1128 04:02:21.376717  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.382609  385277 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1128 04:02:21.382686  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1128 04:02:21.434065  385277 cri.go:89] found id: ""
	I1128 04:02:21.434102  385277 logs.go:284] 0 containers: []
	W1128 04:02:21.434113  385277 logs.go:286] No container was found matching "kindnet"
	I1128 04:02:21.434121  385277 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1128 04:02:21.434191  385277 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1128 04:02:21.475230  385277 cri.go:89] found id: "1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.475265  385277 cri.go:89] found id: "ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.475272  385277 cri.go:89] found id: ""
	I1128 04:02:21.475300  385277 logs.go:284] 2 containers: [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e]
	I1128 04:02:21.475367  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.479918  385277 ssh_runner.go:195] Run: which crictl
	I1128 04:02:21.483989  385277 logs.go:123] Gathering logs for etcd [39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58] ...
	I1128 04:02:21.484014  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39b2c5787e96c4659bdce46a43c4f9e1b6ef0fc1fd123edf191b3f64693e9e58"
	I1128 04:02:21.550040  385277 logs.go:123] Gathering logs for storage-provisioner [1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a] ...
	I1128 04:02:21.550086  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1806bf0461d3ccb7910ba4ed97098263dcf45c447eac5162aa3972bda6d9517a"
	I1128 04:02:21.604802  385277 logs.go:123] Gathering logs for container status ...
	I1128 04:02:21.604854  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1128 04:02:21.667187  385277 logs.go:123] Gathering logs for kubelet ...
	I1128 04:02:21.667230  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1128 04:02:21.735542  385277 logs.go:123] Gathering logs for kube-scheduler [09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe] ...
	I1128 04:02:21.735591  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 09e3428759987fafaec930921fbe14db4be31cdf2a59f20384684f8e2096a5fe"
	I1128 04:02:21.778554  385277 logs.go:123] Gathering logs for kube-controller-manager [59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639] ...
	I1128 04:02:21.778600  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59767f5d5ca26d31ad2f2b5ba537ae572b60a7443c0a1bc8dff5d88cfa0b4639"
	I1128 04:02:21.841737  385277 logs.go:123] Gathering logs for storage-provisioner [ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e] ...
	I1128 04:02:21.841776  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ef25aa6706867d359eafb31c0c63e1e4418dc283541111b17ff782592cdaa05e"
	I1128 04:02:21.885454  385277 logs.go:123] Gathering logs for CRI-O ...
	I1128 04:02:21.885494  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1128 04:02:22.264498  385277 logs.go:123] Gathering logs for dmesg ...
	I1128 04:02:22.264545  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1128 04:02:22.281694  385277 logs.go:123] Gathering logs for describe nodes ...
	I1128 04:02:22.281727  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1128 04:02:22.441500  385277 logs.go:123] Gathering logs for kube-apiserver [d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be] ...
	I1128 04:02:22.441548  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d962ca3c6d6a3a501e430d570758f4af2267bfd79998daa06fb8d96261cb42be"
	I1128 04:02:22.516971  385277 logs.go:123] Gathering logs for coredns [4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371] ...
	I1128 04:02:22.517015  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4f1b83cb6065a80e8cb56a9f4a563a1f7c16c2dd694aa6dfefc3722725b4e371"
	I1128 04:02:22.570642  385277 logs.go:123] Gathering logs for kube-proxy [3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7] ...
	I1128 04:02:22.570676  385277 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3c249ebac5ace5941b1120b39d0989af5ede59d6b87a250703c4aafcc7baa5e7"
	I1128 04:02:25.123556  385277 system_pods.go:59] 8 kube-system pods found
	I1128 04:02:25.123590  385277 system_pods.go:61] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.123595  385277 system_pods.go:61] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.123600  385277 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.123604  385277 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.123608  385277 system_pods.go:61] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.123613  385277 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.123620  385277 system_pods.go:61] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.123626  385277 system_pods.go:61] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.123633  385277 system_pods.go:74] duration metric: took 4.027284696s to wait for pod list to return data ...
	I1128 04:02:25.123641  385277 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:25.127575  385277 default_sa.go:45] found service account: "default"
	I1128 04:02:25.127601  385277 default_sa.go:55] duration metric: took 3.954108ms for default service account to be created ...
	I1128 04:02:25.127611  385277 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:25.136183  385277 system_pods.go:86] 8 kube-system pods found
	I1128 04:02:25.136217  385277 system_pods.go:89] "coredns-5dd5756b68-5pf9p" [ae5e9fbf-4e4a-46f2-9ef7-8e4975ff9f5f] Running
	I1128 04:02:25.136224  385277 system_pods.go:89] "etcd-default-k8s-diff-port-725962" [abff41ae-f288-4d54-adf6-8a870facceb6] Running
	I1128 04:02:25.136232  385277 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-725962" [8c480314-719e-4e83-bfa7-0b1b474b9aa6] Running
	I1128 04:02:25.136240  385277 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-725962" [1ddfb52e-646f-4c19-901c-cf55418b57c3] Running
	I1128 04:02:25.136246  385277 system_pods.go:89] "kube-proxy-sp9nc" [b54c0c14-5531-417f-8ce9-547c4bc9c9cf] Running
	I1128 04:02:25.136253  385277 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-725962" [31d78690-4f1f-4993-b9a1-33599365e4db] Running
	I1128 04:02:25.136266  385277 system_pods.go:89] "metrics-server-57f55c9bc5-9bqg8" [48d11dc2-ea03-4b2d-ac8b-afa0c6273c80] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:25.136280  385277 system_pods.go:89] "storage-provisioner" [074eb0a7-45ef-4b63-9068-e061637207f7] Running
	I1128 04:02:25.136291  385277 system_pods.go:126] duration metric: took 8.673655ms to wait for k8s-apps to be running ...
	I1128 04:02:25.136303  385277 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:25.136362  385277 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:25.158811  385277 system_svc.go:56] duration metric: took 22.495299ms WaitForService to wait for kubelet.
	I1128 04:02:25.158862  385277 kubeadm.go:581] duration metric: took 4m22.865858856s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:25.158891  385277 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:25.162679  385277 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:25.162706  385277 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:25.162717  385277 node_conditions.go:105] duration metric: took 3.821419ms to run NodePressure ...
	I1128 04:02:25.162745  385277 start.go:228] waiting for startup goroutines ...
	I1128 04:02:25.162751  385277 start.go:233] waiting for cluster config update ...
	I1128 04:02:25.162760  385277 start.go:242] writing updated cluster config ...
	I1128 04:02:25.163075  385277 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:25.217545  385277 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:02:25.219820  385277 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-725962" cluster and "default" namespace by default
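With the profile reported as Done, the cluster is usable directly through the configured context; for example (illustrative, using the context name printed above):

    kubectl --context default-k8s-diff-port-725962 get nodes -o wide
    kubectl --context default-k8s-diff-port-725962 get pods -A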
	I1128 04:02:28.624093  385190 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.0
	I1128 04:02:28.624173  385190 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:02:28.624301  385190 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:02:28.624444  385190 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:02:28.624561  385190 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:02:28.624641  385190 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:02:28.626365  385190 out.go:204]   - Generating certificates and keys ...
	I1128 04:02:28.626465  385190 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:02:28.626548  385190 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:02:28.626645  385190 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:02:28.626719  385190 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:02:28.626828  385190 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:02:28.626908  385190 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:02:28.626985  385190 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:02:28.627057  385190 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:02:28.627166  385190 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:02:28.627259  385190 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:02:28.627315  385190 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:02:28.627384  385190 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:02:28.627442  385190 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:02:28.627513  385190 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1128 04:02:28.627573  385190 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:02:28.627653  385190 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:02:28.627717  385190 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:02:28.627821  385190 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:02:28.627901  385190 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:02:28.629387  385190 out.go:204]   - Booting up control plane ...
	I1128 04:02:28.629496  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:02:28.629593  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:02:28.629701  385190 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:02:28.629825  385190 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:02:28.629933  385190 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:02:28.629985  385190 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:02:28.630182  385190 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:02:28.630292  385190 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502940 seconds
	I1128 04:02:28.630437  385190 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:02:28.630586  385190 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:02:28.630656  385190 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:02:28.630869  385190 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-222348 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:02:28.630937  385190 kubeadm.go:322] [bootstrap-token] Using token: 7e8qc3.nnytwd8q8fl84l6i
	I1128 04:02:28.632838  385190 out.go:204]   - Configuring RBAC rules ...
	I1128 04:02:28.632987  385190 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:02:28.633108  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:02:28.633273  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:02:28.633455  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:02:28.633635  385190 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:02:28.633737  385190 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:02:28.633909  385190 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:02:28.633964  385190 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:02:28.634003  385190 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:02:28.634009  385190 kubeadm.go:322] 
	I1128 04:02:28.634063  385190 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:02:28.634070  385190 kubeadm.go:322] 
	I1128 04:02:28.634130  385190 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:02:28.634136  385190 kubeadm.go:322] 
	I1128 04:02:28.634157  385190 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:02:28.634205  385190 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:02:28.634250  385190 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:02:28.634256  385190 kubeadm.go:322] 
	I1128 04:02:28.634333  385190 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:02:28.634349  385190 kubeadm.go:322] 
	I1128 04:02:28.634438  385190 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:02:28.634462  385190 kubeadm.go:322] 
	I1128 04:02:28.634525  385190 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:02:28.634659  385190 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:02:28.634759  385190 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:02:28.634773  385190 kubeadm.go:322] 
	I1128 04:02:28.634879  385190 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:02:28.634957  385190 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:02:28.634965  385190 kubeadm.go:322] 
	I1128 04:02:28.635041  385190 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635153  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:02:28.635188  385190 kubeadm.go:322] 	--control-plane 
	I1128 04:02:28.635197  385190 kubeadm.go:322] 
	I1128 04:02:28.635304  385190 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:02:28.635313  385190 kubeadm.go:322] 
	I1128 04:02:28.635411  385190 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7e8qc3.nnytwd8q8fl84l6i \
	I1128 04:02:28.635541  385190 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
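The discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; if it ever needs to be recomputed on the control-plane node, the standard openssl pipeline from the kubeadm documentation is:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'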
	I1128 04:02:28.635574  385190 cni.go:84] Creating CNI manager for ""
	I1128 04:02:28.635588  385190 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:02:28.637435  385190 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:02:28.638928  385190 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:02:25.536491  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:28.037478  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:26.077199  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:28.654704  385190 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:02:28.714435  385190 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:02:28.714516  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.714524  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=no-preload-222348 minikube.k8s.io/updated_at=2023_11_28T04_02_28_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:28.790761  385190 ops.go:34] apiserver oom_adj: -16
	I1128 04:02:28.965788  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.082351  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:29.680586  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.181037  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.680560  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.181252  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:31.680411  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.180401  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:32.681195  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:33.180867  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:30.535026  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.536808  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:32.161184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:33.680538  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.180615  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:34.680359  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.180746  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.681099  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.180588  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:36.681059  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.180397  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:37.680629  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:38.180710  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:35.036694  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:37.535611  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:35.229145  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:38.681268  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.180491  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:39.680634  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.180761  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:40.681057  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.180983  385190 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:02:41.309439  385190 kubeadm.go:1081] duration metric: took 12.594981015s to wait for elevateKubeSystemPrivileges.
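The repeated "kubectl get sa default" invocations above are a poll: minikube keeps asking for the default ServiceAccount until the API server has created it, and that wait is included in the elevateKubeSystemPrivileges duration reported here. A rough shell equivalent of the poll (illustrative only):

    until sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done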
	I1128 04:02:41.309479  385190 kubeadm.go:406] StartCluster complete in 5m13.943228432s
	I1128 04:02:41.309503  385190 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.309588  385190 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:02:41.311897  385190 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:02:41.312215  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:02:41.312322  385190 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:02:41.312407  385190 addons.go:69] Setting storage-provisioner=true in profile "no-preload-222348"
	I1128 04:02:41.312422  385190 addons.go:69] Setting default-storageclass=true in profile "no-preload-222348"
	I1128 04:02:41.312436  385190 addons.go:231] Setting addon storage-provisioner=true in "no-preload-222348"
	I1128 04:02:41.312438  385190 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-222348"
	W1128 04:02:41.312445  385190 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:02:41.312446  385190 addons.go:69] Setting metrics-server=true in profile "no-preload-222348"
	I1128 04:02:41.312462  385190 config.go:182] Loaded profile config "no-preload-222348": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.0
	I1128 04:02:41.312475  385190 addons.go:231] Setting addon metrics-server=true in "no-preload-222348"
	W1128 04:02:41.312485  385190 addons.go:240] addon metrics-server should already be in state true
	I1128 04:02:41.312510  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312537  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312926  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312960  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.312985  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.312956  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.328695  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45735
	I1128 04:02:41.328709  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44013
	I1128 04:02:41.328795  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39549
	I1128 04:02:41.332632  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332652  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.332640  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.333191  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333213  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333323  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333340  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.333358  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333344  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.333610  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333774  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.333826  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.334168  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334182  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.334399  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.334587  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.334602  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.338095  385190 addons.go:231] Setting addon default-storageclass=true in "no-preload-222348"
	W1128 04:02:41.338117  385190 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:02:41.338150  385190 host.go:66] Checking if "no-preload-222348" exists ...
	I1128 04:02:41.338562  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.338582  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.351757  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I1128 04:02:41.352462  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.353001  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.353018  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.353432  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.353689  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.354246  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43911
	I1128 04:02:41.354837  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.355324  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.355342  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.355772  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.356535  385190 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:02:41.356577  385190 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:02:41.356832  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33321
	I1128 04:02:41.357390  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.357499  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.359297  385190 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:02:41.357865  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.360511  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.360704  385190 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.360715  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:02:41.360729  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.361075  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.361268  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.363830  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.365783  385190 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:02:41.364607  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.365384  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.367315  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:02:41.367328  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:02:41.367348  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.367398  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.367414  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.367426  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.368068  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.368272  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.370196  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370716  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.370740  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.370820  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.371038  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.371144  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.371280  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.374445  385190 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 04:02:41.374734  385190 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:02:41.375079  385190 main.go:141] libmachine: Using API Version  1
	I1128 04:02:41.375089  385190 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:02:41.375305  385190 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:02:41.375403  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetState
	I1128 04:02:41.376672  385190 main.go:141] libmachine: (no-preload-222348) Calling .DriverName
	I1128 04:02:41.376916  385190 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.376931  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:02:41.376944  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHHostname
	I1128 04:02:41.379448  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379800  385190 main.go:141] libmachine: (no-preload-222348) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6e:9d:ee", ip: ""} in network mk-no-preload-222348: {Iface:virbr1 ExpiryTime:2023-11-28 04:56:57 +0000 UTC Type:0 Mac:52:54:00:6e:9d:ee Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:no-preload-222348 Clientid:01:52:54:00:6e:9d:ee}
	I1128 04:02:41.379839  385190 main.go:141] libmachine: (no-preload-222348) DBG | domain no-preload-222348 has defined IP address 192.168.39.106 and MAC address 52:54:00:6e:9d:ee in network mk-no-preload-222348
	I1128 04:02:41.379946  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHPort
	I1128 04:02:41.380070  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHKeyPath
	I1128 04:02:41.380154  385190 main.go:141] libmachine: (no-preload-222348) Calling .GetSSHUsername
	I1128 04:02:41.380223  385190 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/no-preload-222348/id_rsa Username:docker}
	I1128 04:02:41.388696  385190 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-222348" context rescaled to 1 replicas
	I1128 04:02:41.388733  385190 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8443 KubernetesVersion:v1.29.0-rc.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:02:41.390613  385190 out.go:177] * Verifying Kubernetes components...
	I1128 04:02:41.391975  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:41.644941  385190 node_ready.go:35] waiting up to 6m0s for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.645100  385190 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:02:41.665031  385190 node_ready.go:49] node "no-preload-222348" has status "Ready":"True"
	I1128 04:02:41.665067  385190 node_ready.go:38] duration metric: took 20.088639ms waiting for node "no-preload-222348" to be "Ready" ...
	I1128 04:02:41.665082  385190 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:41.682673  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:41.759560  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:02:41.759595  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:02:41.905887  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:02:41.922496  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:02:41.955296  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:02:41.955331  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:02:42.013986  385190 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.014023  385190 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:02:42.104936  385190 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:02:42.373507  385190 start.go:926] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
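The "host record injected" message above is the result of the sed pipeline run at 04:02:41.645: it splices a hosts stanza into the CoreDNS Corefile so that host.minikube.internal resolves to the host-side gateway. After the replace, the Corefile contains a block equivalent to:

    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }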
	I1128 04:02:43.023075  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.117131952s)
	I1128 04:02:43.023099  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.100573063s)
	I1128 04:02:43.023137  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023153  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023217  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023235  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023471  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023491  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023502  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023510  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023615  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023659  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023682  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.023693  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023704  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.023724  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.023704  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.023898  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.023917  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.116124  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.116162  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.116591  385190 main.go:141] libmachine: (no-preload-222348) DBG | Closing plugin on server side
	I1128 04:02:43.116636  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.116648  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.309617  385190 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.204630924s)
	I1128 04:02:43.309676  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.309689  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310010  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310031  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310043  385190 main.go:141] libmachine: Making call to close driver server
	I1128 04:02:43.310051  385190 main.go:141] libmachine: (no-preload-222348) Calling .Close
	I1128 04:02:43.310313  385190 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:02:43.310331  385190 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:02:43.310343  385190 addons.go:467] Verifying addon metrics-server=true in "no-preload-222348"
	I1128 04:02:43.312005  385190 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1128 04:02:43.313519  385190 addons.go:502] enable addons completed in 2.001198411s: enabled=[storage-provisioner default-storageclass metrics-server]
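Once "enable addons completed" is reported, the addon state can be confirmed against the same profile; for example (illustrative, reusing the binary and profile naming used elsewhere in this run):

    out/minikube-linux-amd64 -p no-preload-222348 addons list
    kubectl --context no-preload-222348 -n kube-system get deploy metrics-server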
	I1128 04:02:39.536572  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:42.036107  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:41.309196  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:44.385117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:43.735794  385190 pod_ready.go:102] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:45.228427  385190 pod_ready.go:92] pod "coredns-76f75df574-kqgf5" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.228457  385190 pod_ready.go:81] duration metric: took 3.545740844s waiting for pod "coredns-76f75df574-kqgf5" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.228470  385190 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234714  385190 pod_ready.go:92] pod "coredns-76f75df574-nxnkf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.234747  385190 pod_ready.go:81] duration metric: took 6.268663ms waiting for pod "coredns-76f75df574-nxnkf" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.234767  385190 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240363  385190 pod_ready.go:92] pod "etcd-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.240386  385190 pod_ready.go:81] duration metric: took 5.606452ms waiting for pod "etcd-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.240397  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245748  385190 pod_ready.go:92] pod "kube-apiserver-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.245774  385190 pod_ready.go:81] duration metric: took 5.367922ms waiting for pod "kube-apiserver-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.245786  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251475  385190 pod_ready.go:92] pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:45.251498  385190 pod_ready.go:81] duration metric: took 5.703821ms waiting for pod "kube-controller-manager-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:45.251506  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050247  385190 pod_ready.go:92] pod "kube-proxy-2cf7h" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.050276  385190 pod_ready.go:81] duration metric: took 798.763018ms waiting for pod "kube-proxy-2cf7h" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.050285  385190 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448834  385190 pod_ready.go:92] pod "kube-scheduler-no-preload-222348" in "kube-system" namespace has status "Ready":"True"
	I1128 04:02:46.448860  385190 pod_ready.go:81] duration metric: took 398.568611ms waiting for pod "kube-scheduler-no-preload-222348" in "kube-system" namespace to be "Ready" ...
	I1128 04:02:46.448867  385190 pod_ready.go:38] duration metric: took 4.783773086s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:02:46.448903  385190 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:02:46.448956  385190 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:02:46.462941  385190 api_server.go:72] duration metric: took 5.074163925s to wait for apiserver process to appear ...
	I1128 04:02:46.463051  385190 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:02:46.463074  385190 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8443/healthz ...
	I1128 04:02:46.467657  385190 api_server.go:279] https://192.168.39.106:8443/healthz returned 200:
	ok
	I1128 04:02:46.468866  385190 api_server.go:141] control plane version: v1.29.0-rc.0
	I1128 04:02:46.468903  385190 api_server.go:131] duration metric: took 5.843376ms to wait for apiserver health ...
	I1128 04:02:46.468913  385190 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:02:46.655554  385190 system_pods.go:59] 9 kube-system pods found
	I1128 04:02:46.655587  385190 system_pods.go:61] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:46.655591  385190 system_pods.go:61] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:46.655595  385190 system_pods.go:61] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:46.655600  385190 system_pods.go:61] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:46.655605  385190 system_pods.go:61] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:46.655608  385190 system_pods.go:61] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:46.655612  385190 system_pods.go:61] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:46.655619  385190 system_pods.go:61] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:46.655623  385190 system_pods.go:61] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:46.655631  385190 system_pods.go:74] duration metric: took 186.709524ms to wait for pod list to return data ...
	I1128 04:02:46.655640  385190 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:02:46.849175  385190 default_sa.go:45] found service account: "default"
	I1128 04:02:46.849211  385190 default_sa.go:55] duration metric: took 193.561736ms for default service account to be created ...
	I1128 04:02:46.849224  385190 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:02:47.053165  385190 system_pods.go:86] 9 kube-system pods found
	I1128 04:02:47.053196  385190 system_pods.go:89] "coredns-76f75df574-kqgf5" [c63dad72-b046-4f33-b851-8ca60c237dd7] Running
	I1128 04:02:47.053202  385190 system_pods.go:89] "coredns-76f75df574-nxnkf" [13cd1a3c-a960-4908-adab-8928b59b37b1] Running
	I1128 04:02:47.053206  385190 system_pods.go:89] "etcd-no-preload-222348" [58880da0-6c30-47a7-947e-75827e60d115] Running
	I1128 04:02:47.053210  385190 system_pods.go:89] "kube-apiserver-no-preload-222348" [bd40b09e-e340-4fcf-96b7-1dde699e1527] Running
	I1128 04:02:47.053215  385190 system_pods.go:89] "kube-controller-manager-no-preload-222348" [77251ffe-6515-4cc8-bdc5-d3052afa1955] Running
	I1128 04:02:47.053219  385190 system_pods.go:89] "kube-proxy-2cf7h" [bcbbfab4-753c-4925-9154-27a19052567a] Running
	I1128 04:02:47.053223  385190 system_pods.go:89] "kube-scheduler-no-preload-222348" [69135509-152f-4146-a03f-f3ce7c83819b] Running
	I1128 04:02:47.053230  385190 system_pods.go:89] "metrics-server-57f55c9bc5-kl8k4" [de5f6e30-71af-4043-86de-11d878cc86c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:02:47.053234  385190 system_pods.go:89] "storage-provisioner" [37152287-4d4b-45db-a357-1468fc210bfc] Running
	I1128 04:02:47.053244  385190 system_pods.go:126] duration metric: took 204.014035ms to wait for k8s-apps to be running ...
	I1128 04:02:47.053258  385190 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:02:47.053305  385190 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:02:47.067411  385190 system_svc.go:56] duration metric: took 14.14274ms WaitForService to wait for kubelet.
	I1128 04:02:47.067436  385190 kubeadm.go:581] duration metric: took 5.678670521s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:02:47.067453  385190 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:02:47.249281  385190 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:02:47.249314  385190 node_conditions.go:123] node cpu capacity is 2
	I1128 04:02:47.249327  385190 node_conditions.go:105] duration metric: took 181.869484ms to run NodePressure ...
	I1128 04:02:47.249343  385190 start.go:228] waiting for startup goroutines ...
	I1128 04:02:47.249351  385190 start.go:233] waiting for cluster config update ...
	I1128 04:02:47.249363  385190 start.go:242] writing updated cluster config ...
	I1128 04:02:47.249683  385190 ssh_runner.go:195] Run: rm -f paused
	I1128 04:02:47.301859  385190 start.go:600] kubectl: 1.28.4, cluster: 1.29.0-rc.0 (minor skew: 1)
	I1128 04:02:47.304215  385190 out.go:177] * Done! kubectl is now configured to use "no-preload-222348" cluster and "default" namespace by default
	I1128 04:02:44.036258  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:46.535320  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:49.035723  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:51.036414  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.538606  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:53.501130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:02:56.038018  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:58.038148  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:02:56.573082  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:00.535454  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.536429  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:02.657139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:05.035677  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:07.535352  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:05.725166  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:10.035343  384793 pod_ready.go:102] pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:11.229133  384793 pod_ready.go:81] duration metric: took 4m0.000747713s waiting for pod "metrics-server-74d5856cc6-z4fsg" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:11.229186  384793 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:03:11.229223  384793 pod_ready.go:38] duration metric: took 4m1.198355321s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:11.229295  384793 kubeadm.go:640] restartCluster took 5m7.227749733s
	W1128 04:03:11.229381  384793 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:03:11.229418  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:03:11.809110  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:14.877214  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:17.718633  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (6.489183339s)
	I1128 04:03:17.718715  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:17.739229  384793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:03:17.757193  384793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:03:17.767831  384793 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:03:17.767891  384793 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I1128 04:03:17.992007  384793 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:03:20.961191  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:24.033147  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:31.044187  384793 kubeadm.go:322] [init] Using Kubernetes version: v1.16.0
	I1128 04:03:31.044276  384793 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:03:31.044375  384793 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:03:31.044493  384793 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:03:31.044609  384793 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:03:31.044732  384793 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:03:31.044843  384793 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:03:31.044947  384793 kubeadm.go:322] [kubelet-start] Activating the kubelet service
	I1128 04:03:31.045000  384793 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:03:31.046699  384793 out.go:204]   - Generating certificates and keys ...
	I1128 04:03:31.046809  384793 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:03:31.046903  384793 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:03:31.047016  384793 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:03:31.047101  384793 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:03:31.047160  384793 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:03:31.047208  384793 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:03:31.047264  384793 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:03:31.047314  384793 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:03:31.047377  384793 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:03:31.047482  384793 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:03:31.047529  384793 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:03:31.047578  384793 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:03:31.047620  384793 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:03:31.047694  384793 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:03:31.047788  384793 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:03:31.047884  384793 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:03:31.047988  384793 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:03:31.049345  384793 out.go:204]   - Booting up control plane ...
	I1128 04:03:31.049473  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:03:31.049569  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:03:31.049662  384793 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:03:31.049788  384793 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:03:31.049994  384793 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:03:31.050107  384793 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.503287 seconds
	I1128 04:03:31.050234  384793 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:03:31.050420  384793 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:03:31.050527  384793 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:03:31.050654  384793 kubeadm.go:322] [mark-control-plane] Marking the node old-k8s-version-666657 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I1128 04:03:31.050713  384793 kubeadm.go:322] [bootstrap-token] Using token: gf7r1p.pbcguwte29lkqg9w
	I1128 04:03:31.052000  384793 out.go:204]   - Configuring RBAC rules ...
	I1128 04:03:31.052092  384793 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:03:31.052210  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:03:31.052320  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:03:31.052413  384793 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:03:31.052483  384793 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:03:31.052536  384793 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:03:31.052597  384793 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:03:31.052606  384793 kubeadm.go:322] 
	I1128 04:03:31.052674  384793 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:03:31.052686  384793 kubeadm.go:322] 
	I1128 04:03:31.052781  384793 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:03:31.052797  384793 kubeadm.go:322] 
	I1128 04:03:31.052818  384793 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:03:31.052928  384793 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:03:31.052973  384793 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:03:31.052982  384793 kubeadm.go:322] 
	I1128 04:03:31.053023  384793 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:03:31.053088  384793 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:03:31.053143  384793 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:03:31.053150  384793 kubeadm.go:322] 
	I1128 04:03:31.053220  384793 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities 
	I1128 04:03:31.053286  384793 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:03:31.053292  384793 kubeadm.go:322] 
	I1128 04:03:31.053381  384793 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053534  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:03:31.053573  384793 kubeadm.go:322]     --control-plane 	  
	I1128 04:03:31.053582  384793 kubeadm.go:322] 
	I1128 04:03:31.053693  384793 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:03:31.053705  384793 kubeadm.go:322] 
	I1128 04:03:31.053806  384793 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token gf7r1p.pbcguwte29lkqg9w \
	I1128 04:03:31.053946  384793 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:03:31.053966  384793 cni.go:84] Creating CNI manager for ""
	I1128 04:03:31.053976  384793 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:03:31.055505  384793 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:03:31.057142  384793 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:03:31.079411  384793 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:03:31.115893  384793 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:03:31.115971  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.115980  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=old-k8s-version-666657 minikube.k8s.io/updated_at=2023_11_28T04_03_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.155887  384793 ops.go:34] apiserver oom_adj: -16
	I1128 04:03:31.372659  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:31.491129  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.099198  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:32.598840  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.099309  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:33.599526  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:30.109176  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:33.181170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:34.099192  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:34.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.098837  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:35.599080  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.098595  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:36.599209  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.099078  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:37.599225  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.099115  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:38.599148  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.261149  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:39.099036  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:39.599363  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.099099  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:40.598700  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.099170  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:41.599370  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.099044  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.599281  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.098743  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:43.599233  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:42.333168  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:44.099079  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:44.598797  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.098959  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:45.598648  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.098995  384793 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.16.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:03:46.301569  384793 kubeadm.go:1081] duration metric: took 15.185662789s to wait for elevateKubeSystemPrivileges.
	I1128 04:03:46.301619  384793 kubeadm.go:406] StartCluster complete in 5m42.369662329s
	I1128 04:03:46.301646  384793 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.301755  384793 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:03:46.304463  384793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:03:46.304778  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:03:46.304778  384793 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:03:46.304867  384793 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304898  384793 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-666657"
	I1128 04:03:46.304910  384793 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-666657"
	I1128 04:03:46.304911  384793 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-666657"
	W1128 04:03:46.304920  384793 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:03:46.304927  384793 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-666657"
	W1128 04:03:46.304935  384793 addons.go:240] addon metrics-server should already be in state true
	I1128 04:03:46.304934  384793 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-666657"
	I1128 04:03:46.304987  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.304988  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.305001  384793 config.go:182] Loaded profile config "old-k8s-version-666657": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I1128 04:03:46.305394  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305427  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305454  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305429  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.305395  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.305694  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.322961  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33891
	I1128 04:03:46.322979  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34335
	I1128 04:03:46.323376  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323388  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.323820  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35617
	I1128 04:03:46.323904  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.323916  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324071  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324086  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.324273  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.324410  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324528  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.324590  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.324704  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.324711  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.325059  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.325278  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325304  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.325499  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.325519  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.328349  384793 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-666657"
	W1128 04:03:46.328365  384793 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:03:46.328393  384793 host.go:66] Checking if "old-k8s-version-666657" exists ...
	I1128 04:03:46.328731  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.328750  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.342280  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45973
	I1128 04:03:46.343025  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.343737  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.343759  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.344269  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.344492  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.345036  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39033
	I1128 04:03:46.345665  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.346273  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.346301  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.346384  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.348493  384793 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:03:46.346866  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.349948  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:03:46.349966  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:03:46.349989  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.350099  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.352330  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.352432  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36429
	I1128 04:03:46.354071  384793 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:03:46.352959  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.354459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355328  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.355358  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.355480  384793 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.355501  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:03:46.355518  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.355216  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.355803  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.356414  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.356435  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.356917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.357018  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.357108  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.357738  384793 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:03:46.357769  384793 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:03:46.358467  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.358922  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.358946  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.359072  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.359282  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.359403  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.359610  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.373628  384793 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38971
	I1128 04:03:46.374105  384793 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:03:46.374866  384793 main.go:141] libmachine: Using API Version  1
	I1128 04:03:46.374895  384793 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:03:46.375314  384793 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:03:46.375548  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetState
	I1128 04:03:46.377265  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .DriverName
	I1128 04:03:46.377561  384793 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.377582  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:03:46.377603  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHHostname
	I1128 04:03:46.380459  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.380834  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:c0:2c", ip: ""} in network mk-old-k8s-version-666657: {Iface:virbr2 ExpiryTime:2023-11-28 04:57:45 +0000 UTC Type:0 Mac:52:54:00:ec:c0:2c Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:old-k8s-version-666657 Clientid:01:52:54:00:ec:c0:2c}
	I1128 04:03:46.380864  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | domain old-k8s-version-666657 has defined IP address 192.168.50.7 and MAC address 52:54:00:ec:c0:2c in network mk-old-k8s-version-666657
	I1128 04:03:46.381016  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHPort
	I1128 04:03:46.381169  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHKeyPath
	I1128 04:03:46.381359  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .GetSSHUsername
	I1128 04:03:46.381466  384793 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/old-k8s-version-666657/id_rsa Username:docker}
	I1128 04:03:46.409792  384793 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-666657" context rescaled to 1 replicas
	I1128 04:03:46.409842  384793 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:03:46.411454  384793 out.go:177] * Verifying Kubernetes components...
	I1128 04:03:46.413194  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:03:46.586767  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:03:46.631269  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:03:46.634383  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:03:46.634407  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:03:46.666152  384793 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.666176  384793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:03:46.674225  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:03:46.674248  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:03:46.713431  384793 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:46.713461  384793 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:03:46.793657  384793 node_ready.go:49] node "old-k8s-version-666657" has status "Ready":"True"
	I1128 04:03:46.793685  384793 node_ready.go:38] duration metric: took 127.497314ms waiting for node "old-k8s-version-666657" to be "Ready" ...
	I1128 04:03:46.793695  384793 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:46.793699  384793 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:03:47.263395  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:47.404099  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404139  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404445  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404485  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.404487  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:47.404506  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.404519  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.404786  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.404809  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434537  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:47.434567  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:47.434929  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:47.434986  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:47.434965  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.447368  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.816042626s)
	I1128 04:03:48.447386  384793 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.781104735s)
	I1128 04:03:48.447415  384793 start.go:926] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I1128 04:03:48.447423  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.447803  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.447818  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.447828  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.447836  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.448143  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.448144  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.448166  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.746828  384793 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.953085214s)
	I1128 04:03:48.746898  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.746917  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747352  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.747378  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.747396  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.747420  384793 main.go:141] libmachine: Making call to close driver server
	I1128 04:03:48.747437  384793 main.go:141] libmachine: (old-k8s-version-666657) Calling .Close
	I1128 04:03:48.747692  384793 main.go:141] libmachine: (old-k8s-version-666657) DBG | Closing plugin on server side
	I1128 04:03:48.749007  384793 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:03:48.749027  384793 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:03:48.749045  384793 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-666657"
	I1128 04:03:48.750820  384793 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:03:48.417150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:48.752378  384793 addons.go:502] enable addons completed in 2.447603022s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:03:49.504435  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.973968  384793 pod_ready.go:102] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"False"
	I1128 04:03:51.485111  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:53.973462  384793 pod_ready.go:92] pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.973491  384793 pod_ready.go:81] duration metric: took 6.710064476s waiting for pod "coredns-5644d7b6d9-529cg" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.973504  384793 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.975383  384793 pod_ready.go:97] error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975413  384793 pod_ready.go:81] duration metric: took 1.901164ms waiting for pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace to be "Ready" ...
	E1128 04:03:53.975426  384793 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5644d7b6d9-bt86x" in "kube-system" namespace (skipping!): pods "coredns-5644d7b6d9-bt86x" not found
	I1128 04:03:53.975437  384793 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980213  384793 pod_ready.go:92] pod "kube-proxy-fpjnf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:03:53.980239  384793 pod_ready.go:81] duration metric: took 4.79365ms waiting for pod "kube-proxy-fpjnf" in "kube-system" namespace to be "Ready" ...
	I1128 04:03:53.980249  384793 pod_ready.go:38] duration metric: took 7.186544585s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:03:53.980270  384793 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:03:53.980322  384793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:03:53.995392  384793 api_server.go:72] duration metric: took 7.585507425s to wait for apiserver process to appear ...
	I1128 04:03:53.995438  384793 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:03:53.995455  384793 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I1128 04:03:54.002840  384793 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I1128 04:03:54.003953  384793 api_server.go:141] control plane version: v1.16.0
	I1128 04:03:54.003972  384793 api_server.go:131] duration metric: took 8.527968ms to wait for apiserver health ...
	I1128 04:03:54.003980  384793 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:03:54.008155  384793 system_pods.go:59] 4 kube-system pods found
	I1128 04:03:54.008179  384793 system_pods.go:61] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.008184  384793 system_pods.go:61] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.008192  384793 system_pods.go:61] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.008196  384793 system_pods.go:61] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.008202  384793 system_pods.go:74] duration metric: took 4.21636ms to wait for pod list to return data ...
	I1128 04:03:54.008209  384793 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:03:54.010577  384793 default_sa.go:45] found service account: "default"
	I1128 04:03:54.010597  384793 default_sa.go:55] duration metric: took 2.383201ms for default service account to be created ...
	I1128 04:03:54.010603  384793 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:03:54.014085  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.014107  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.014114  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.014121  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.014125  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.014142  384793 retry.go:31] will retry after 305.81254ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.325645  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.325690  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.325700  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.325711  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.325717  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.325737  384793 retry.go:31] will retry after 265.004483ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.596427  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.596465  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.596472  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.596483  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.596491  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.596515  384793 retry.go:31] will retry after 379.763313ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:54.981569  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:54.981599  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:54.981607  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:54.981617  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:54.981624  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:54.981646  384793 retry.go:31] will retry after 439.396023ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.426531  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.426560  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.426565  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.426572  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.426577  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.426593  384793 retry.go:31] will retry after 551.563469ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:55.983013  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:55.983042  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:55.983048  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:55.983055  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:55.983060  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:55.983076  384793 retry.go:31] will retry after 647.414701ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:56.635207  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:56.635238  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:56.635243  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:56.635251  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:56.635256  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:56.635276  384793 retry.go:31] will retry after 1.037316769s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.678748  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:57.678791  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:57.678800  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:57.678810  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:57.678815  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:57.678836  384793 retry.go:31] will retry after 1.167348672s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:03:57.565155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:03:58.851584  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:03:58.851615  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:03:58.851621  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:03:58.851627  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:03:58.851632  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:03:58.851649  384793 retry.go:31] will retry after 1.37796567s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.235244  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:00.235270  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:00.235276  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:00.235282  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:00.235288  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:00.235313  384793 retry.go:31] will retry after 2.090359712s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:02.330947  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:02.330984  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:02.331002  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:02.331013  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:02.331020  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:02.331041  384793 retry.go:31] will retry after 2.451255186s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:00.637193  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:04.787969  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:04.787999  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:04.788004  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:04.788011  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:04.788016  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:04.788033  384793 retry.go:31] will retry after 2.859833817s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:07.653629  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:07.653661  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:07.653667  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:07.653674  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:07.653679  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:07.653697  384793 retry.go:31] will retry after 4.226694897s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:06.721130  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:09.789162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:11.886456  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:11.886488  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:11.886496  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:11.886503  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:11.886508  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:11.886538  384793 retry.go:31] will retry after 4.177038986s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:16.069291  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:16.069324  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:16.069330  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:16.069336  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:16.069341  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:16.069359  384793 retry.go:31] will retry after 4.273733761s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:15.869195  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:18.945228  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:20.347960  384793 system_pods.go:86] 4 kube-system pods found
	I1128 04:04:20.347992  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:20.347998  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:20.348004  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:20.348009  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:20.348028  384793 retry.go:31] will retry after 6.790786839s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:27.147442  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:27.147481  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:27.147489  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:27.147496  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Pending
	I1128 04:04:27.147506  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:27.147513  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:27.147532  384793 retry.go:31] will retry after 7.530763623s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I1128 04:04:25.021154  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:28.093157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.177177  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:34.684745  384793 system_pods.go:86] 5 kube-system pods found
	I1128 04:04:34.684783  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:34.684792  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:34.684799  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:34.684807  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:34.684813  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:34.684835  384793 retry.go:31] will retry after 10.243202989s: missing components: etcd, kube-apiserver, kube-controller-manager
	I1128 04:04:37.245170  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:43.325131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:44.935423  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:04:44.935456  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:04:44.935462  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:04:44.935469  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Pending
	I1128 04:04:44.935474  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Pending
	I1128 04:04:44.935480  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:04:44.935486  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:04:44.935493  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:04:44.935498  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:04:44.935517  384793 retry.go:31] will retry after 15.895769684s: missing components: kube-apiserver, kube-controller-manager
	I1128 04:04:46.397235  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:52.481117  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:04:55.549226  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:00.839171  384793 system_pods.go:86] 8 kube-system pods found
	I1128 04:05:00.839203  384793 system_pods.go:89] "coredns-5644d7b6d9-529cg" [1c07d1ac-6461-451e-a1bf-4a5493d7d453] Running
	I1128 04:05:00.839209  384793 system_pods.go:89] "etcd-old-k8s-version-666657" [738449a4-70dd-4f66-9282-488a5518a415] Running
	I1128 04:05:00.839213  384793 system_pods.go:89] "kube-apiserver-old-k8s-version-666657" [6229a95c-ad3d-46c1-bd2e-61b0a1d67a4a] Running
	I1128 04:05:00.839217  384793 system_pods.go:89] "kube-controller-manager-old-k8s-version-666657" [7b900ce2-b484-4aba-b3ac-d6974b3fd961] Running
	I1128 04:05:00.839221  384793 system_pods.go:89] "kube-proxy-fpjnf" [62ef95f3-b9bc-4936-a2e7-398191b6bed5] Running
	I1128 04:05:00.839225  384793 system_pods.go:89] "kube-scheduler-old-k8s-version-666657" [baac3fe7-f343-4774-80bf-9ba3080c3f66] Running
	I1128 04:05:00.839231  384793 system_pods.go:89] "metrics-server-74d5856cc6-wlfq5" [64cff3b8-b297-425e-91bc-26e7ca091bfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:05:00.839236  384793 system_pods.go:89] "storage-provisioner" [ed59bc28-66f5-44f8-9ff5-d5be69e0049a] Running
	I1128 04:05:00.839245  384793 system_pods.go:126] duration metric: took 1m6.828635432s to wait for k8s-apps to be running ...
	I1128 04:05:00.839253  384793 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:05:00.839308  384793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:05:00.858602  384793 system_svc.go:56] duration metric: took 19.336447ms WaitForService to wait for kubelet.
	I1128 04:05:00.858640  384793 kubeadm.go:581] duration metric: took 1m14.448764188s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:05:00.858663  384793 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:05:00.862657  384793 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:05:00.862682  384793 node_conditions.go:123] node cpu capacity is 2
	I1128 04:05:00.862695  384793 node_conditions.go:105] duration metric: took 4.026622ms to run NodePressure ...
	I1128 04:05:00.862709  384793 start.go:228] waiting for startup goroutines ...
	I1128 04:05:00.862721  384793 start.go:233] waiting for cluster config update ...
	I1128 04:05:00.862736  384793 start.go:242] writing updated cluster config ...
	I1128 04:05:00.863037  384793 ssh_runner.go:195] Run: rm -f paused
	I1128 04:05:00.914674  384793 start.go:600] kubectl: 1.28.4, cluster: 1.16.0 (minor skew: 12)
	I1128 04:05:00.916795  384793 out.go:177] 
	W1128 04:05:00.918292  384793 out.go:239] ! /usr/local/bin/kubectl is version 1.28.4, which may have incompatibilities with Kubernetes 1.16.0.
	I1128 04:05:00.919711  384793 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I1128 04:05:00.921263  384793 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-666657" cluster and "default" namespace by default
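The long run of "will retry after ..." lines above is a poll-with-backoff loop: list the kube-system pods, note which control-plane components are still missing, sleep a growing delay, and try again until a timeout. A rough, hypothetical Go sketch of that pattern (the jitter factor, cap and condition signature are assumptions, not minikube's retry.go API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// condition reports whether the wait is satisfied and, if not, which
// control-plane components are still missing.
type condition func() (done bool, missing []string)

// pollWithBackoff retries the condition with a growing, lightly jittered delay,
// capped at max, until it succeeds or the overall timeout elapses.
func pollWithBackoff(check condition, initial, max, timeout time.Duration) error {
	delay := initial
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		done, missing := check()
		if done {
			return nil
		}
		fmt.Printf("will retry after %s: missing components: %v\n", delay, missing)
		time.Sleep(delay)
		delay = time.Duration(float64(delay) * (1.2 + rand.Float64()*0.8)) // grow with jitter
		if delay > max {
			delay = max
		}
	}
	return errors.New("timed out waiting for missing components")
}

func main() {
	// Stand-in check: in the log this would list kube-system pods and report which
	// of etcd/kube-apiserver/kube-controller-manager/kube-scheduler are absent.
	_ = pollWithBackoff(func() (bool, []string) {
		return false, []string{"etcd", "kube-apiserver"}
	}, 300*time.Millisecond, 10*time.Second, 3*time.Second)
}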
	I1128 04:05:01.629125  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:04.701205  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:10.781216  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:13.853213  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:19.933127  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:23.005456  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:29.085157  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:32.161103  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:38.237107  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:41.313150  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:47.389244  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:50.461131  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:56.541162  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:05:59.613200  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:05.693144  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:08.765184  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:14.845161  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:17.921139  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:23.997190  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:27.069225  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:33.149188  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:36.221163  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:42.301167  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:45.373156  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:51.453155  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:54.525189  388252 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.208:22: connect: no route to host
	I1128 04:06:57.526358  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:06:57.526408  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:06:57.528448  388252 machine.go:91] provisioned docker machine in 4m37.381939051s
	I1128 04:06:57.528492  388252 fix.go:56] fixHost completed within 4m37.404595738s
	I1128 04:06:57.528498  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 4m37.404645524s
	W1128 04:06:57.528514  388252 start.go:691] error starting host: provision: host is not running
	W1128 04:06:57.528751  388252 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I1128 04:06:57.528762  388252 start.go:706] Will try again in 5 seconds ...
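The repeated "Error dialing TCP ... no route to host" lines above come from probing the VM's SSH port while the guest is still down; provisioning only proceeds once a dial succeeds. A minimal, hypothetical sketch of such a reachability probe (the address and intervals are illustrative only, not libmachine's code):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSHPort dials the guest's SSH port until a TCP connection succeeds
// or the timeout elapses, logging each failed attempt.
func waitForSSHPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil // port 22 is accepting connections
		}
		fmt.Printf("Error dialing TCP: %v\n", err)
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("%s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSHPort("192.168.72.208:22", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}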
	I1128 04:07:02.528995  388252 start.go:365] acquiring machines lock for embed-certs-672176: {Name:mkf299bd5a49685b251bc5f55a52dc8c0facfc6f Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1128 04:07:02.529144  388252 start.go:369] acquired machines lock for "embed-certs-672176" in 79.815µs
	I1128 04:07:02.529172  388252 start.go:96] Skipping create...Using existing machine configuration
	I1128 04:07:02.529180  388252 fix.go:54] fixHost starting: 
	I1128 04:07:02.529654  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:07:02.529689  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:07:02.545443  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 04:07:02.546041  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:07:02.546627  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:07:02.546657  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:07:02.547002  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:07:02.547202  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:02.547393  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:07:02.549209  388252 fix.go:102] recreateIfNeeded on embed-certs-672176: state=Stopped err=<nil>
	I1128 04:07:02.549234  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	W1128 04:07:02.549378  388252 fix.go:128] unexpected machine state, will restart: <nil>
	I1128 04:07:02.551250  388252 out.go:177] * Restarting existing kvm2 VM for "embed-certs-672176" ...
	I1128 04:07:02.552611  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Start
	I1128 04:07:02.552792  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring networks are active...
	I1128 04:07:02.553615  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network default is active
	I1128 04:07:02.553928  388252 main.go:141] libmachine: (embed-certs-672176) Ensuring network mk-embed-certs-672176 is active
	I1128 04:07:02.554371  388252 main.go:141] libmachine: (embed-certs-672176) Getting domain xml...
	I1128 04:07:02.555218  388252 main.go:141] libmachine: (embed-certs-672176) Creating domain...
	I1128 04:07:03.867073  388252 main.go:141] libmachine: (embed-certs-672176) Waiting to get IP...
	I1128 04:07:03.868115  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:03.868595  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:03.868706  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:03.868567  389161 retry.go:31] will retry after 306.367802ms: waiting for machine to come up
	I1128 04:07:04.176148  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.176727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.176760  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.176665  389161 retry.go:31] will retry after 349.820346ms: waiting for machine to come up
	I1128 04:07:04.528319  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.528804  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.528830  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.528753  389161 retry.go:31] will retry after 434.816613ms: waiting for machine to come up
	I1128 04:07:04.965453  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:04.965931  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:04.965964  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:04.965859  389161 retry.go:31] will retry after 504.812349ms: waiting for machine to come up
	I1128 04:07:05.472644  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.473150  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.473181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.473089  389161 retry.go:31] will retry after 512.859795ms: waiting for machine to come up
	I1128 04:07:05.987622  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:05.988077  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:05.988101  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:05.988023  389161 retry.go:31] will retry after 578.673806ms: waiting for machine to come up
	I1128 04:07:06.568420  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:06.568923  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:06.568957  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:06.568863  389161 retry.go:31] will retry after 1.101477644s: waiting for machine to come up
	I1128 04:07:07.671698  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:07.672126  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:07.672156  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:07.672054  389161 retry.go:31] will retry after 1.379684082s: waiting for machine to come up
	I1128 04:07:09.053227  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:09.053918  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:09.053950  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:09.053851  389161 retry.go:31] will retry after 1.775284772s: waiting for machine to come up
	I1128 04:07:10.831571  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:10.832140  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:10.832177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:10.832065  389161 retry.go:31] will retry after 2.005203426s: waiting for machine to come up
	I1128 04:07:12.838667  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:12.839159  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:12.839187  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:12.839113  389161 retry.go:31] will retry after 2.403192486s: waiting for machine to come up
	I1128 04:07:15.244005  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:15.244513  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:15.244553  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:15.244427  389161 retry.go:31] will retry after 2.329820043s: waiting for machine to come up
	I1128 04:07:17.576268  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:17.576707  388252 main.go:141] libmachine: (embed-certs-672176) DBG | unable to find current IP address of domain embed-certs-672176 in network mk-embed-certs-672176
	I1128 04:07:17.576748  388252 main.go:141] libmachine: (embed-certs-672176) DBG | I1128 04:07:17.576652  389161 retry.go:31] will retry after 4.220303586s: waiting for machine to come up
	I1128 04:07:21.801976  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802441  388252 main.go:141] libmachine: (embed-certs-672176) Found IP for machine: 192.168.72.208
	I1128 04:07:21.802469  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has current primary IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.802483  388252 main.go:141] libmachine: (embed-certs-672176) Reserving static IP address...
	I1128 04:07:21.802890  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.802920  388252 main.go:141] libmachine: (embed-certs-672176) DBG | skip adding static IP to network mk-embed-certs-672176 - found existing host DHCP lease matching {name: "embed-certs-672176", mac: "52:54:00:14:33:cc", ip: "192.168.72.208"}
	I1128 04:07:21.802939  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Getting to WaitForSSH function...
	I1128 04:07:21.802955  388252 main.go:141] libmachine: (embed-certs-672176) Reserved static IP address: 192.168.72.208
	I1128 04:07:21.802967  388252 main.go:141] libmachine: (embed-certs-672176) Waiting for SSH to be available...
	I1128 04:07:21.805675  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806052  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.806086  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.806212  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH client type: external
	I1128 04:07:21.806237  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Using SSH private key: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa (-rw-------)
	I1128 04:07:21.806261  388252 main.go:141] libmachine: (embed-certs-672176) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.208 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1128 04:07:21.806272  388252 main.go:141] libmachine: (embed-certs-672176) DBG | About to run SSH command:
	I1128 04:07:21.806284  388252 main.go:141] libmachine: (embed-certs-672176) DBG | exit 0
	I1128 04:07:21.897047  388252 main.go:141] libmachine: (embed-certs-672176) DBG | SSH cmd err, output: <nil>: 
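The "Using SSH client type: external" block above shells out to the system ssh binary and runs `exit 0` to confirm the restarted guest accepts logins. A trimmed, hypothetical sketch of that kind of probe (the flag set and key path are illustrative placeholders, not the exact libmachine command line):

package main

import (
	"fmt"
	"os/exec"
)

// sshExitZero runs `exit 0` on the remote host via the system ssh client,
// ignoring host keys, and reports whether the login succeeded.
func sshExitZero(user, host, keyPath string) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	out, err := exec.Command("ssh", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh probe failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// keyPath is a placeholder; the log uses the per-machine id_rsa under .minikube.
	if err := sshExitZero("docker", "192.168.72.208", "/path/to/id_rsa"); err != nil {
		fmt.Println(err)
	}
}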
	I1128 04:07:21.897443  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetConfigRaw
	I1128 04:07:21.898164  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:21.901014  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901421  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.901454  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.901679  388252 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/config.json ...
	I1128 04:07:21.901872  388252 machine.go:88] provisioning docker machine ...
	I1128 04:07:21.901891  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:21.902121  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902304  388252 buildroot.go:166] provisioning hostname "embed-certs-672176"
	I1128 04:07:21.902318  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:21.902482  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:21.905282  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905757  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:21.905798  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:21.905977  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:21.906187  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906383  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:21.906565  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:21.906734  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:21.907224  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:21.907254  388252 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-672176 && echo "embed-certs-672176" | sudo tee /etc/hostname
	I1128 04:07:22.042525  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-672176
	
	I1128 04:07:22.042553  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.045516  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.045916  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.045961  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.046143  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.046353  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046526  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.046676  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.046861  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.047186  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.047207  388252 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672176' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672176/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672176' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1128 04:07:22.179515  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1128 04:07:22.179552  388252 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17671-333305/.minikube CaCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17671-333305/.minikube}
	I1128 04:07:22.179578  388252 buildroot.go:174] setting up certificates
	I1128 04:07:22.179591  388252 provision.go:83] configureAuth start
	I1128 04:07:22.179602  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetMachineName
	I1128 04:07:22.179940  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:22.182782  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183167  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.183199  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.183344  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.185770  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186158  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.186195  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.186348  388252 provision.go:138] copyHostCerts
	I1128 04:07:22.186407  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem, removing ...
	I1128 04:07:22.186418  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem
	I1128 04:07:22.186494  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/ca.pem (1078 bytes)
	I1128 04:07:22.186609  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem, removing ...
	I1128 04:07:22.186623  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem
	I1128 04:07:22.186658  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/cert.pem (1123 bytes)
	I1128 04:07:22.186756  388252 exec_runner.go:144] found /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem, removing ...
	I1128 04:07:22.186772  388252 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem
	I1128 04:07:22.186830  388252 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17671-333305/.minikube/key.pem (1675 bytes)
	I1128 04:07:22.186915  388252 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672176 san=[192.168.72.208 192.168.72.208 localhost 127.0.0.1 minikube embed-certs-672176]
	I1128 04:07:22.268178  388252 provision.go:172] copyRemoteCerts
	I1128 04:07:22.268250  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1128 04:07:22.268305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.270816  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.271181  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.271382  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.271571  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.271730  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.271880  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.362340  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1128 04:07:22.387591  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1128 04:07:22.412169  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1128 04:07:22.437185  388252 provision.go:86] duration metric: configureAuth took 257.574597ms
	I1128 04:07:22.437223  388252 buildroot.go:189] setting minikube options for container-runtime
	I1128 04:07:22.437418  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:07:22.437496  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.440503  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.440937  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.440984  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.441148  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.441414  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441626  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.441808  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.442043  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.442369  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.442386  388252 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1128 04:07:22.778314  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1128 04:07:22.778344  388252 machine.go:91] provisioned docker machine in 876.457785ms
	I1128 04:07:22.778392  388252 start.go:300] post-start starting for "embed-certs-672176" (driver="kvm2")
	I1128 04:07:22.778413  388252 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1128 04:07:22.778463  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:22.778894  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1128 04:07:22.778934  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.781750  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782161  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.782203  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.782336  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.782653  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.782870  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.783045  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:22.876530  388252 ssh_runner.go:195] Run: cat /etc/os-release
	I1128 04:07:22.881442  388252 info.go:137] Remote host: Buildroot 2021.02.12
	I1128 04:07:22.881472  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/addons for local assets ...
	I1128 04:07:22.881541  388252 filesync.go:126] Scanning /home/jenkins/minikube-integration/17671-333305/.minikube/files for local assets ...
	I1128 04:07:22.881618  388252 filesync.go:149] local asset: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem -> 3405152.pem in /etc/ssl/certs
	I1128 04:07:22.881701  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1128 04:07:22.891393  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:22.914734  388252 start.go:303] post-start completed in 136.316733ms
	I1128 04:07:22.914771  388252 fix.go:56] fixHost completed within 20.385588986s
	I1128 04:07:22.914800  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:22.917856  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918267  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:22.918301  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:22.918449  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:22.918697  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.918898  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:22.919051  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:22.919230  388252 main.go:141] libmachine: Using SSH client type: native
	I1128 04:07:22.919548  388252 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x809320] 0x80c000 <nil>  [] 0s} 192.168.72.208 22 <nil> <nil>}
	I1128 04:07:22.919561  388252 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1128 04:07:23.037790  388252 main.go:141] libmachine: SSH cmd err, output: <nil>: 1701144442.982632661
	
	I1128 04:07:23.037817  388252 fix.go:206] guest clock: 1701144442.982632661
	I1128 04:07:23.037828  388252 fix.go:219] Guest: 2023-11-28 04:07:22.982632661 +0000 UTC Remote: 2023-11-28 04:07:22.914776935 +0000 UTC m=+302.972189005 (delta=67.855726ms)
	I1128 04:07:23.037853  388252 fix.go:190] guest clock delta is within tolerance: 67.855726ms
	I1128 04:07:23.037860  388252 start.go:83] releasing machines lock for "embed-certs-672176", held for 20.508701455s
	I1128 04:07:23.037879  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.038196  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:23.040928  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041276  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.041309  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.041473  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042009  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042217  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:07:23.042315  388252 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1128 04:07:23.042380  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.042447  388252 ssh_runner.go:195] Run: cat /version.json
	I1128 04:07:23.042479  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:07:23.045070  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045430  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.045459  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045478  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.045634  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.045826  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.045987  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.045998  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:23.046020  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:23.046131  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.046197  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:07:23.046338  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:07:23.046455  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:07:23.046594  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:07:23.158653  388252 ssh_runner.go:195] Run: systemctl --version
	I1128 04:07:23.164496  388252 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1128 04:07:23.313946  388252 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1128 04:07:23.320220  388252 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1128 04:07:23.320326  388252 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1128 04:07:23.339262  388252 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1128 04:07:23.339296  388252 start.go:472] detecting cgroup driver to use...
	I1128 04:07:23.339401  388252 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1128 04:07:23.352989  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1128 04:07:23.367735  388252 docker.go:203] disabling cri-docker service (if available) ...
	I1128 04:07:23.367797  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1128 04:07:23.382143  388252 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1128 04:07:23.395983  388252 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1128 04:07:23.513475  388252 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1128 04:07:23.657449  388252 docker.go:219] disabling docker service ...
	I1128 04:07:23.657531  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1128 04:07:23.672662  388252 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1128 04:07:23.685142  388252 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1128 04:07:23.810404  388252 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1128 04:07:23.929413  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1128 04:07:23.942971  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1128 04:07:23.961419  388252 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1128 04:07:23.961493  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.971562  388252 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1128 04:07:23.971643  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.981660  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:23.992472  388252 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1128 04:07:24.002748  388252 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1128 04:07:24.016234  388252 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1128 04:07:24.025560  388252 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1128 04:07:24.025629  388252 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1128 04:07:24.039085  388252 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1128 04:07:24.048324  388252 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1128 04:07:24.160507  388252 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1128 04:07:24.331205  388252 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1128 04:07:24.331292  388252 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1128 04:07:24.336480  388252 start.go:540] Will wait 60s for crictl version
	I1128 04:07:24.336541  388252 ssh_runner.go:195] Run: which crictl
	I1128 04:07:24.341052  388252 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1128 04:07:24.376784  388252 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I1128 04:07:24.376910  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.425035  388252 ssh_runner.go:195] Run: crio --version
	I1128 04:07:24.485230  388252 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I1128 04:07:24.486822  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetIP
	I1128 04:07:24.490127  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490529  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:07:24.490558  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:07:24.490733  388252 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1128 04:07:24.494881  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:24.510006  388252 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 04:07:24.510097  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:24.549615  388252 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I1128 04:07:24.549699  388252 ssh_runner.go:195] Run: which lz4
	I1128 04:07:24.554039  388252 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1128 04:07:24.558068  388252 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1128 04:07:24.558101  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I1128 04:07:26.358503  388252 crio.go:444] Took 1.804493 seconds to copy over tarball
	I1128 04:07:26.358586  388252 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I1128 04:07:29.679041  388252 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.320417818s)
	I1128 04:07:29.679072  388252 crio.go:451] Took 3.320535 seconds to extract the tarball
	I1128 04:07:29.679086  388252 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1128 04:07:29.723905  388252 ssh_runner.go:195] Run: sudo crictl images --output json
	I1128 04:07:29.774544  388252 crio.go:496] all images are preloaded for cri-o runtime.
	I1128 04:07:29.774574  388252 cache_images.go:84] Images are preloaded, skipping loading
	I1128 04:07:29.774683  388252 ssh_runner.go:195] Run: crio config
	I1128 04:07:29.841740  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:29.841767  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:29.841792  388252 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1128 04:07:29.841826  388252 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.208 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672176 NodeName:embed-certs-672176 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.208"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.208 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1128 04:07:29.842004  388252 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.208
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-672176"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.208
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.208"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
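
The generated kubeadm config above is one multi-document YAML file carrying InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small standard-library Go sketch that splits such a file on document separators and prints the kind of each document; the file path is the one written in this log, and the sketch is illustrative rather than anything minikube itself runs:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// Path matches the kubeadm.yaml staged by minikube in the log above.
    	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// kubeadm configs separate documents with a bare "---" line.
    	for _, doc := range strings.Split(string(data), "\n---\n") {
    		for _, line := range strings.Split(doc, "\n") {
    			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
    				fmt.Println(strings.TrimSpace(line))
    				break
    			}
    		}
    	}
    }
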
	
	I1128 04:07:29.842115  388252 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-672176 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.208
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1128 04:07:29.842184  388252 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I1128 04:07:29.854017  388252 binaries.go:44] Found k8s binaries, skipping transfer
	I1128 04:07:29.854103  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1128 04:07:29.863871  388252 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1128 04:07:29.880656  388252 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1128 04:07:29.899138  388252 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I1128 04:07:29.919697  388252 ssh_runner.go:195] Run: grep 192.168.72.208	control-plane.minikube.internal$ /etc/hosts
	I1128 04:07:29.924087  388252 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.208	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1128 04:07:29.936814  388252 certs.go:56] Setting up /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176 for IP: 192.168.72.208
	I1128 04:07:29.936851  388252 certs.go:190] acquiring lock for shared ca certs: {Name:mk57c0483467fb0022a439f1b546194ca653d1ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:07:29.937053  388252 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key
	I1128 04:07:29.937097  388252 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key
	I1128 04:07:29.937198  388252 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/client.key
	I1128 04:07:29.937274  388252 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key.9e96c9f0
	I1128 04:07:29.937334  388252 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key
	I1128 04:07:29.937491  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem (1338 bytes)
	W1128 04:07:29.937524  388252 certs.go:433] ignoring /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515_empty.pem, impossibly tiny 0 bytes
	I1128 04:07:29.937535  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca-key.pem (1679 bytes)
	I1128 04:07:29.937561  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/ca.pem (1078 bytes)
	I1128 04:07:29.937586  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/cert.pem (1123 bytes)
	I1128 04:07:29.937607  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/certs/home/jenkins/minikube-integration/17671-333305/.minikube/certs/key.pem (1675 bytes)
	I1128 04:07:29.937698  388252 certs.go:437] found cert: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem (1708 bytes)
	I1128 04:07:29.938553  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1128 04:07:29.963444  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1128 04:07:29.988035  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1128 04:07:30.012981  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/embed-certs-672176/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1128 04:07:30.219926  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1128 04:07:30.244077  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1128 04:07:30.268833  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1128 04:07:30.293921  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1128 04:07:30.322839  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/ssl/certs/3405152.pem --> /usr/share/ca-certificates/3405152.pem (1708 bytes)
	I1128 04:07:30.349783  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1128 04:07:30.374569  388252 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17671-333305/.minikube/certs/340515.pem --> /usr/share/ca-certificates/340515.pem (1338 bytes)
	I1128 04:07:30.401804  388252 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1128 04:07:30.420925  388252 ssh_runner.go:195] Run: openssl version
	I1128 04:07:30.427193  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3405152.pem && ln -fs /usr/share/ca-certificates/3405152.pem /etc/ssl/certs/3405152.pem"
	I1128 04:07:30.439369  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444359  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Nov 28 02:50 /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.444455  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3405152.pem
	I1128 04:07:30.451032  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3405152.pem /etc/ssl/certs/3ec20f2e.0"
	I1128 04:07:30.464110  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1128 04:07:30.477275  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483239  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Nov 28 02:41 /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.483314  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1128 04:07:30.489884  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1128 04:07:30.501967  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/340515.pem && ln -fs /usr/share/ca-certificates/340515.pem /etc/ssl/certs/340515.pem"
	I1128 04:07:30.514081  388252 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519079  388252 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Nov 28 02:50 /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.519157  388252 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/340515.pem
	I1128 04:07:30.525194  388252 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/340515.pem /etc/ssl/certs/51391683.0"
	I1128 04:07:30.536594  388252 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1128 04:07:30.541041  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1128 04:07:30.547008  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1128 04:07:30.554317  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1128 04:07:30.561063  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1128 04:07:30.567355  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1128 04:07:30.573719  388252 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
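
Each openssl x509 -checkend 86400 run above asserts that the certificate will still be valid 24 hours from now. An equivalent check in Go with crypto/x509; the certificate path is one of the files checked in the log, and the sketch only mirrors the -checkend semantics:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		fmt.Fprintln(os.Stderr, "no PEM block found")
    		os.Exit(1)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Same meaning as `openssl x509 -checkend 86400`: fail if the cert
    	// expires within the next 24 hours.
    	deadline := time.Now().Add(24 * time.Hour)
    	if deadline.After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24 hours")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24 hours")
    }
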
	I1128 04:07:30.580010  388252 kubeadm.go:404] StartCluster: {Name:embed-certs-672176 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.4 ClusterName:embed-certs-672176 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 04:07:30.580166  388252 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1128 04:07:30.580237  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:30.623908  388252 cri.go:89] found id: ""
	I1128 04:07:30.623980  388252 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1128 04:07:30.635847  388252 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I1128 04:07:30.635911  388252 kubeadm.go:636] restartCluster start
	I1128 04:07:30.635982  388252 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1128 04:07:30.646523  388252 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.647648  388252 kubeconfig.go:92] found "embed-certs-672176" server: "https://192.168.72.208:8443"
	I1128 04:07:30.650037  388252 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1128 04:07:30.660625  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.660703  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.674234  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:30.674258  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:30.674309  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:30.687276  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.188012  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.188122  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.201481  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:31.688057  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:31.688152  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:31.701564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.188188  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.201049  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:32.688113  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:32.688191  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:32.700824  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.187399  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.187517  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.200128  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:33.687562  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:33.687688  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:33.700564  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.188276  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.188406  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.201686  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:34.688327  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:34.688426  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:34.701023  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.187672  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.187809  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.200598  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:35.688485  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:35.688565  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:35.701518  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.188131  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.188213  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.201708  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:36.688321  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:36.688430  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:36.701852  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.187395  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.187539  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.200267  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:37.688365  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:37.688447  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:37.701921  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.187456  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.187615  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.201388  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:38.687819  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:38.687933  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:38.700584  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.188195  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.188302  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.201557  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:39.688192  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:39.688268  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:39.700990  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.187806  388252 api_server.go:166] Checking apiserver status ...
	I1128 04:07:40.187918  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1128 04:07:40.201110  388252 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1128 04:07:40.660853  388252 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I1128 04:07:40.660908  388252 kubeadm.go:1128] stopping kube-system containers ...
	I1128 04:07:40.660926  388252 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1128 04:07:40.661008  388252 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1128 04:07:40.706945  388252 cri.go:89] found id: ""
	I1128 04:07:40.707017  388252 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1128 04:07:40.724988  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:07:40.735077  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:07:40.735165  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745110  388252 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1128 04:07:40.745146  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:40.870777  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:41.851187  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.047008  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.129329  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:42.194986  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:07:42.195074  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.210225  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:42.727622  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.227063  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:43.726928  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.227709  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.727790  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:07:44.756952  388252 api_server.go:72] duration metric: took 2.561964065s to wait for apiserver process to appear ...
	I1128 04:07:44.756989  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:07:44.757011  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.757778  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:44.757838  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:44.758268  388252 api_server.go:269] stopped: https://192.168.72.208:8443/healthz: Get "https://192.168.72.208:8443/healthz": dial tcp 192.168.72.208:8443: connect: connection refused
	I1128 04:07:45.258785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.416741  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.416771  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.416785  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.484252  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1128 04:07:49.484292  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1128 04:07:49.758607  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:49.765159  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:49.765189  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.258770  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.264464  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.264499  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:50.759164  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:50.765206  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1128 04:07:50.765246  388252 api_server.go:103] status: https://192.168.72.208:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1128 04:07:51.258591  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:07:51.264758  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:07:51.274077  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:07:51.274110  388252 api_server.go:131] duration metric: took 6.517112692s to wait for apiserver health ...
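
The healthz loop above keeps polling https://192.168.72.208:8443/healthz until the apiserver returns 200, tolerating the transient 403 and 500 responses seen while the RBAC and priority-class bootstrap post-start hooks finish. A compact Go sketch of the same retry pattern; the insecure TLS client and the two-minute budget are assumptions for illustration (minikube itself trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // apiserver reported "ok"
    			}
    			// 403/500 while bootstrap hooks run: keep waiting.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver at %s not healthy after %v", url, timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.72.208:8443/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("apiserver healthy")
    }
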
	I1128 04:07:51.274122  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:07:51.274130  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:07:51.276088  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:07:51.277582  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:07:51.302050  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:07:51.355400  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:07:51.371543  388252 system_pods.go:59] 8 kube-system pods found
	I1128 04:07:51.371592  388252 system_pods.go:61] "coredns-5dd5756b68-296l9" [a79e060e-b757-46b9-882e-5f065aed0f46] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1128 04:07:51.371605  388252 system_pods.go:61] "etcd-embed-certs-672176" [610938df-5b75-4fef-b632-19af73d74dab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1128 04:07:51.371623  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [3e513b84-29f4-4285-aea3-963078fa9e74] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1128 04:07:51.371633  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [6fb9a912-0c05-47d1-8420-26d0bbbe92c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1128 04:07:51.371640  388252 system_pods.go:61] "kube-proxy-4cvwh" [9882c0aa-5c66-4b53-8c8e-827c1cddaac5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1128 04:07:51.371652  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [2d7c706d-f01b-4e80-ba35-8ef97f27faa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1128 04:07:51.371659  388252 system_pods.go:61] "metrics-server-57f55c9bc5-sbkpc" [ea558db5-2aab-4e1e-aa62-a4595172d108] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:07:51.371666  388252 system_pods.go:61] "storage-provisioner" [96737dd7-931e-4ac5-b662-c560a4b6642e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1128 04:07:51.371676  388252 system_pods.go:74] duration metric: took 16.247766ms to wait for pod list to return data ...
	I1128 04:07:51.371694  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:07:51.376458  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:07:51.376495  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:07:51.376508  388252 node_conditions.go:105] duration metric: took 4.80925ms to run NodePressure ...
	I1128 04:07:51.376539  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1128 04:07:51.778110  388252 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I1128 04:07:51.786916  388252 kubeadm.go:787] kubelet initialised
	I1128 04:07:51.787002  388252 kubeadm.go:788] duration metric: took 8.859672ms waiting for restarted kubelet to initialise ...
	I1128 04:07:51.787019  388252 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:07:51.799380  388252 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.807214  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807261  388252 pod_ready.go:81] duration metric: took 7.829357ms waiting for pod "coredns-5dd5756b68-296l9" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.807274  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "coredns-5dd5756b68-296l9" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.807299  388252 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.814516  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814550  388252 pod_ready.go:81] duration metric: took 7.235029ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.814569  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "etcd-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.814576  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:51.827729  388252 pod_ready.go:97] node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827759  388252 pod_ready.go:81] duration metric: took 13.172422ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	E1128 04:07:51.827768  388252 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-672176" hosting pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-672176" has status "Ready":"False"
	I1128 04:07:51.827774  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:54.190842  388252 pod_ready.go:102] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:07:56.189656  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.189758  388252 pod_ready.go:81] duration metric: took 4.36196703s waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.189779  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196462  388252 pod_ready.go:92] pod "kube-proxy-4cvwh" in "kube-system" namespace has status "Ready":"True"
	I1128 04:07:56.196503  388252 pod_ready.go:81] duration metric: took 6.707028ms waiting for pod "kube-proxy-4cvwh" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:56.196517  388252 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:07:58.590819  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:00.590953  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:02.595296  388252 pod_ready.go:102] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:04.592801  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:08:04.592826  388252 pod_ready.go:81] duration metric: took 8.396301174s waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:04.592839  388252 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	I1128 04:08:06.618794  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:08.619204  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:11.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:13.618160  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:15.619404  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:17.620107  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:20.118789  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:22.119626  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:24.619088  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:26.619353  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:29.118548  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:31.118625  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:33.122964  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:35.620077  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:38.118800  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:40.618996  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:42.619252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:45.118801  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:47.118987  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:49.619233  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:52.118338  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:54.120044  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:56.619768  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:08:59.119321  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:01.119784  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:03.619289  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:06.119695  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:08.618767  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:10.620952  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:13.119086  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:15.121912  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:17.618200  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:19.619428  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:22.117316  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:24.118147  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:26.119945  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:28.619687  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:30.619772  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:33.118414  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:35.622173  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:38.118091  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:40.118723  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:42.119551  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:44.119931  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:46.619572  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:48.620898  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:51.118343  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:53.619215  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:56.119440  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:09:58.620299  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:01.118313  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:03.618615  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:05.619056  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:07.622475  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:10.117858  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:12.119468  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:14.619203  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:16.619540  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:19.118749  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:21.619618  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:23.620623  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:26.118183  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:28.118246  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:30.618282  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:33.117841  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:35.122904  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:37.619116  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:40.118304  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:42.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:44.621653  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:47.119733  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:49.618284  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:51.619099  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:54.118728  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:56.121041  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:10:58.618237  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:00.619430  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:03.119263  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:05.619558  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:07.620571  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:10.117924  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:12.118001  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:14.119916  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:16.618621  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:18.620149  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:21.118296  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:23.118614  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:25.119100  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:27.120549  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:29.618264  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:32.119075  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:34.619939  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:37.119561  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:39.119896  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:41.617842  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:43.618594  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:45.618757  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:47.619342  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:49.623012  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:52.119438  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:54.121760  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:56.620252  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:11:59.120191  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:01.618305  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:03.619616  388252 pod_ready.go:102] pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:04.593067  388252 pod_ready.go:81] duration metric: took 4m0.000190987s waiting for pod "metrics-server-57f55c9bc5-sbkpc" in "kube-system" namespace to be "Ready" ...
	E1128 04:12:04.593121  388252 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I1128 04:12:04.593139  388252 pod_ready.go:38] duration metric: took 4m12.806107308s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:04.593168  388252 kubeadm.go:640] restartCluster took 4m33.957247441s
	W1128 04:12:04.593251  388252 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I1128 04:12:04.593282  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1128 04:12:18.614553  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.021224516s)
	I1128 04:12:18.614653  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:18.628836  388252 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1128 04:12:18.640242  388252 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1128 04:12:18.649879  388252 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1128 04:12:18.649930  388252 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1128 04:12:18.702438  388252 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I1128 04:12:18.702606  388252 kubeadm.go:322] [preflight] Running pre-flight checks
	I1128 04:12:18.867279  388252 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1128 04:12:18.867400  388252 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1128 04:12:18.867534  388252 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1128 04:12:19.120397  388252 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1128 04:12:19.122246  388252 out.go:204]   - Generating certificates and keys ...
	I1128 04:12:19.122357  388252 kubeadm.go:322] [certs] Using existing ca certificate authority
	I1128 04:12:19.122474  388252 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I1128 04:12:19.122646  388252 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1128 04:12:19.122757  388252 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I1128 04:12:19.122856  388252 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I1128 04:12:19.122934  388252 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I1128 04:12:19.123028  388252 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I1128 04:12:19.123173  388252 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I1128 04:12:19.123270  388252 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1128 04:12:19.123380  388252 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1128 04:12:19.123435  388252 kubeadm.go:322] [certs] Using the existing "sa" key
	I1128 04:12:19.123517  388252 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1128 04:12:19.397687  388252 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1128 04:12:19.545433  388252 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1128 04:12:19.753655  388252 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1128 04:12:19.867889  388252 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1128 04:12:19.868510  388252 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1128 04:12:19.873288  388252 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1128 04:12:19.875099  388252 out.go:204]   - Booting up control plane ...
	I1128 04:12:19.875243  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1128 04:12:19.875362  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1128 04:12:19.875447  388252 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1128 04:12:19.890902  388252 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1128 04:12:19.891790  388252 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1128 04:12:19.891903  388252 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I1128 04:12:20.033327  388252 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1128 04:12:28.539450  388252 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.505311 seconds
	I1128 04:12:28.539554  388252 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1128 04:12:28.556290  388252 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1128 04:12:29.115246  388252 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I1128 04:12:29.115517  388252 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-672176 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1128 04:12:29.632584  388252 kubeadm.go:322] [bootstrap-token] Using token: fhdku8.6c57fpjso9w7rrxv
	I1128 04:12:29.634185  388252 out.go:204]   - Configuring RBAC rules ...
	I1128 04:12:29.634320  388252 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1128 04:12:29.640994  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1128 04:12:29.653566  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1128 04:12:29.660519  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1128 04:12:29.665018  388252 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1128 04:12:29.677514  388252 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1128 04:12:29.691421  388252 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1128 04:12:29.939496  388252 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I1128 04:12:30.049393  388252 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I1128 04:12:30.049425  388252 kubeadm.go:322] 
	I1128 04:12:30.049538  388252 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I1128 04:12:30.049559  388252 kubeadm.go:322] 
	I1128 04:12:30.049652  388252 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I1128 04:12:30.049683  388252 kubeadm.go:322] 
	I1128 04:12:30.049721  388252 kubeadm.go:322]   mkdir -p $HOME/.kube
	I1128 04:12:30.049806  388252 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1128 04:12:30.049876  388252 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1128 04:12:30.049884  388252 kubeadm.go:322] 
	I1128 04:12:30.049983  388252 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I1128 04:12:30.050004  388252 kubeadm.go:322] 
	I1128 04:12:30.050076  388252 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1128 04:12:30.050088  388252 kubeadm.go:322] 
	I1128 04:12:30.050145  388252 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I1128 04:12:30.050234  388252 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1128 04:12:30.050337  388252 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1128 04:12:30.050347  388252 kubeadm.go:322] 
	I1128 04:12:30.050444  388252 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I1128 04:12:30.050532  388252 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I1128 04:12:30.050539  388252 kubeadm.go:322] 
	I1128 04:12:30.050633  388252 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.050753  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 \
	I1128 04:12:30.050784  388252 kubeadm.go:322] 	--control-plane 
	I1128 04:12:30.050790  388252 kubeadm.go:322] 
	I1128 04:12:30.050888  388252 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I1128 04:12:30.050898  388252 kubeadm.go:322] 
	I1128 04:12:30.050994  388252 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token fhdku8.6c57fpjso9w7rrxv \
	I1128 04:12:30.051118  388252 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:59b980ddf96a3e12c59e69cfb6e934240bd8cfc8b1fa58612892ff6b047a2745 
	I1128 04:12:30.051556  388252 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1128 04:12:30.051597  388252 cni.go:84] Creating CNI manager for ""
	I1128 04:12:30.051611  388252 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 04:12:30.053491  388252 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1128 04:12:30.055147  388252 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1128 04:12:30.088905  388252 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I1128 04:12:30.132297  388252 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1128 04:12:30.132365  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9 minikube.k8s.io/name=embed-certs-672176 minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.132370  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.459401  388252 ops.go:34] apiserver oom_adj: -16
	I1128 04:12:30.459555  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:30.568049  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.166991  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:31.666953  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.167174  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:32.666615  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.166464  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:33.667438  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:34.666474  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.167309  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:35.667310  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.166896  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:36.667030  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.167265  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:37.667172  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.166893  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:38.667196  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.166889  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:39.667205  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.167112  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:40.667377  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.167422  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:41.666650  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.167425  388252 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1128 04:12:42.308007  388252 kubeadm.go:1081] duration metric: took 12.175710221s to wait for elevateKubeSystemPrivileges.
	I1128 04:12:42.308051  388252 kubeadm.go:406] StartCluster complete in 5m11.728054603s
	I1128 04:12:42.308070  388252 settings.go:142] acquiring lock: {Name:mkfb2d7093b322fda2d9cc2312f5f3624ab7d089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.308149  388252 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 04:12:42.310104  388252 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17671-333305/kubeconfig: {Name:mkce00712cda810f42537a2620766baea0a598c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1128 04:12:42.310352  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1128 04:12:42.310440  388252 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false]
	I1128 04:12:42.310557  388252 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-672176"
	I1128 04:12:42.310581  388252 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-672176"
	W1128 04:12:42.310588  388252 addons.go:240] addon storage-provisioner should already be in state true
	I1128 04:12:42.310601  388252 config.go:182] Loaded profile config "embed-certs-672176": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 04:12:42.310668  388252 addons.go:69] Setting default-storageclass=true in profile "embed-certs-672176"
	I1128 04:12:42.310684  388252 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672176"
	I1128 04:12:42.310698  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311002  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311040  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311081  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311113  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.311110  388252 addons.go:69] Setting metrics-server=true in profile "embed-certs-672176"
	I1128 04:12:42.311127  388252 addons.go:231] Setting addon metrics-server=true in "embed-certs-672176"
	W1128 04:12:42.311134  388252 addons.go:240] addon metrics-server should already be in state true
	I1128 04:12:42.311167  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.311539  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.311584  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.328327  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46827
	I1128 04:12:42.328769  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329061  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35335
	I1128 04:12:42.329541  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.329720  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.329731  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.329740  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1128 04:12:42.330179  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.330195  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.330193  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330557  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.330572  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.330768  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.331035  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.331050  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.331073  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.331151  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.331476  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.332248  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.332359  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.334824  388252 addons.go:231] Setting addon default-storageclass=true in "embed-certs-672176"
	W1128 04:12:42.334849  388252 addons.go:240] addon default-storageclass should already be in state true
	I1128 04:12:42.334882  388252 host.go:66] Checking if "embed-certs-672176" exists ...
	I1128 04:12:42.335253  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.335333  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.352633  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40133
	I1128 04:12:42.353356  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.353736  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37797
	I1128 04:12:42.353967  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.353982  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.354364  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.354559  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.355670  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37125
	I1128 04:12:42.355716  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356215  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.356764  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356808  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.356772  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.356965  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.356984  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.359122  388252 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1128 04:12:42.357414  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.357431  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.360619  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1128 04:12:42.360666  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1128 04:12:42.360695  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.360632  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.360981  388252 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 04:12:42.361031  388252 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 04:12:42.362951  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.365190  388252 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1128 04:12:42.364654  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365222  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.365254  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.365285  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.365431  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.367020  388252 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.367079  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1128 04:12:42.367146  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.367154  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.367365  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.370570  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371152  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.371177  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.371181  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.371352  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.371712  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.371881  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.381549  388252 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45411
	I1128 04:12:42.382167  388252 main.go:141] libmachine: () Calling .GetVersion
	I1128 04:12:42.382667  388252 main.go:141] libmachine: Using API Version  1
	I1128 04:12:42.382726  388252 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 04:12:42.383173  388252 main.go:141] libmachine: () Calling .GetMachineName
	I1128 04:12:42.383387  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetState
	I1128 04:12:42.384921  388252 main.go:141] libmachine: (embed-certs-672176) Calling .DriverName
	I1128 04:12:42.385265  388252 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.385284  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1128 04:12:42.385305  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHHostname
	I1128 04:12:42.388576  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389134  388252 main.go:141] libmachine: (embed-certs-672176) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:33:cc", ip: ""} in network mk-embed-certs-672176: {Iface:virbr4 ExpiryTime:2023-11-28 05:07:15 +0000 UTC Type:0 Mac:52:54:00:14:33:cc Iaid: IPaddr:192.168.72.208 Prefix:24 Hostname:embed-certs-672176 Clientid:01:52:54:00:14:33:cc}
	I1128 04:12:42.389197  388252 main.go:141] libmachine: (embed-certs-672176) DBG | domain embed-certs-672176 has defined IP address 192.168.72.208 and MAC address 52:54:00:14:33:cc in network mk-embed-certs-672176
	I1128 04:12:42.389203  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHPort
	I1128 04:12:42.389439  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHKeyPath
	I1128 04:12:42.389617  388252 main.go:141] libmachine: (embed-certs-672176) Calling .GetSSHUsername
	I1128 04:12:42.389783  388252 sshutil.go:53] new ssh client: &{IP:192.168.72.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/embed-certs-672176/id_rsa Username:docker}
	I1128 04:12:42.513762  388252 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-672176" context rescaled to 1 replicas
	I1128 04:12:42.513815  388252 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1128 04:12:42.515768  388252 out.go:177] * Verifying Kubernetes components...
	I1128 04:12:42.517584  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:42.565623  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1128 04:12:42.565648  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1128 04:12:42.583220  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1128 04:12:42.591345  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1128 04:12:42.596578  388252 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.596679  388252 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1128 04:12:42.615808  388252 node_ready.go:49] node "embed-certs-672176" has status "Ready":"True"
	I1128 04:12:42.615836  388252 node_ready.go:38] duration metric: took 19.228862ms waiting for node "embed-certs-672176" to be "Ready" ...
	I1128 04:12:42.615848  388252 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:42.637885  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1128 04:12:42.637913  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1128 04:12:42.667328  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:42.863842  388252 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:42.863897  388252 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1128 04:12:42.947911  388252 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1128 04:12:44.507109  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.923846344s)
	I1128 04:12:44.507207  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507227  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.507634  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.507655  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.507667  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.507677  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.509371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.509455  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.509479  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.585867  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:44.585899  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:44.586220  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:44.586243  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:44.586371  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:44.829833  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:45.125413  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.534026387s)
	I1128 04:12:45.125477  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125492  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.125490  388252 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.528780545s)
	I1128 04:12:45.125516  388252 start.go:926] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I1128 04:12:45.125839  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.125859  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.125874  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.125883  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.126171  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.126184  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.126201  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429252  388252 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.481263549s)
	I1128 04:12:45.429311  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429327  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429703  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.429772  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.429787  388252 main.go:141] libmachine: Making call to close driver server
	I1128 04:12:45.429797  388252 main.go:141] libmachine: (embed-certs-672176) Calling .Close
	I1128 04:12:45.429727  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430078  388252 main.go:141] libmachine: (embed-certs-672176) DBG | Closing plugin on server side
	I1128 04:12:45.430119  388252 main.go:141] libmachine: Successfully made call to close driver server
	I1128 04:12:45.430135  388252 main.go:141] libmachine: Making call to close connection to plugin binary
	I1128 04:12:45.430149  388252 addons.go:467] Verifying addon metrics-server=true in "embed-certs-672176"
	I1128 04:12:45.432135  388252 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1128 04:12:45.433222  388252 addons.go:502] enable addons completed in 3.122792003s: enabled=[default-storageclass storage-provisioner metrics-server]
	I1128 04:12:46.830144  388252 pod_ready.go:102] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"False"
	I1128 04:12:47.831025  388252 pod_ready.go:92] pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.831057  388252 pod_ready.go:81] duration metric: took 5.163697448s waiting for pod "coredns-5dd5756b68-48xtx" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.831067  388252 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837379  388252 pod_ready.go:92] pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.837400  388252 pod_ready.go:81] duration metric: took 6.325699ms waiting for pod "coredns-5dd5756b68-qws7p" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.837411  388252 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842711  388252 pod_ready.go:92] pod "etcd-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.842736  388252 pod_ready.go:81] duration metric: took 5.316988ms waiting for pod "etcd-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.842744  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848771  388252 pod_ready.go:92] pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.848792  388252 pod_ready.go:81] duration metric: took 6.042201ms waiting for pod "kube-apiserver-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.848801  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854704  388252 pod_ready.go:92] pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:47.854729  388252 pod_ready.go:81] duration metric: took 5.922154ms waiting for pod "kube-controller-manager-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:47.854737  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227290  388252 pod_ready.go:92] pod "kube-proxy-q7srf" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.227318  388252 pod_ready.go:81] duration metric: took 372.573682ms waiting for pod "kube-proxy-q7srf" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.227331  388252 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627054  388252 pod_ready.go:92] pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace has status "Ready":"True"
	I1128 04:12:48.627088  388252 pod_ready.go:81] duration metric: took 399.749681ms waiting for pod "kube-scheduler-embed-certs-672176" in "kube-system" namespace to be "Ready" ...
	I1128 04:12:48.627097  388252 pod_ready.go:38] duration metric: took 6.011238284s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1128 04:12:48.627114  388252 api_server.go:52] waiting for apiserver process to appear ...
	I1128 04:12:48.627164  388252 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 04:12:48.645283  388252 api_server.go:72] duration metric: took 6.131420029s to wait for apiserver process to appear ...
	I1128 04:12:48.645317  388252 api_server.go:88] waiting for apiserver healthz status ...
	I1128 04:12:48.645345  388252 api_server.go:253] Checking apiserver healthz at https://192.168.72.208:8443/healthz ...
	I1128 04:12:48.651616  388252 api_server.go:279] https://192.168.72.208:8443/healthz returned 200:
	ok
	I1128 04:12:48.653231  388252 api_server.go:141] control plane version: v1.28.4
	I1128 04:12:48.653252  388252 api_server.go:131] duration metric: took 7.92759ms to wait for apiserver health ...
	I1128 04:12:48.653262  388252 system_pods.go:43] waiting for kube-system pods to appear ...
	I1128 04:12:48.831400  388252 system_pods.go:59] 9 kube-system pods found
	I1128 04:12:48.831430  388252 system_pods.go:61] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:48.831435  388252 system_pods.go:61] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:48.831439  388252 system_pods.go:61] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:48.831443  388252 system_pods.go:61] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:48.831447  388252 system_pods.go:61] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:48.831451  388252 system_pods.go:61] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:48.831454  388252 system_pods.go:61] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:48.831461  388252 system_pods.go:61] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:48.831466  388252 system_pods.go:61] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:48.831473  388252 system_pods.go:74] duration metric: took 178.206375ms to wait for pod list to return data ...
	I1128 04:12:48.831481  388252 default_sa.go:34] waiting for default service account to be created ...
	I1128 04:12:49.027724  388252 default_sa.go:45] found service account: "default"
	I1128 04:12:49.027754  388252 default_sa.go:55] duration metric: took 196.266769ms for default service account to be created ...
	I1128 04:12:49.027762  388252 system_pods.go:116] waiting for k8s-apps to be running ...
	I1128 04:12:49.231633  388252 system_pods.go:86] 9 kube-system pods found
	I1128 04:12:49.231663  388252 system_pods.go:89] "coredns-5dd5756b68-48xtx" [1229f57f-a420-4c63-ae05-8a051f556bbd] Running
	I1128 04:12:49.231669  388252 system_pods.go:89] "coredns-5dd5756b68-qws7p" [19e86a95-23a4-4222-955d-9c560db64c80] Running
	I1128 04:12:49.231673  388252 system_pods.go:89] "etcd-embed-certs-672176" [6591bb2b-2d10-4f8b-9d1a-919b39590717] Running
	I1128 04:12:49.231677  388252 system_pods.go:89] "kube-apiserver-embed-certs-672176" [0ddbb8ba-804f-43ef-a803-62570732f165] Running
	I1128 04:12:49.231682  388252 system_pods.go:89] "kube-controller-manager-embed-certs-672176" [8dcb6ffa-1e26-420f-b385-e145cf24282a] Running
	I1128 04:12:49.231687  388252 system_pods.go:89] "kube-proxy-q7srf" [a2390c61-7f2a-40ac-ad4c-c47e78a3eb90] Running
	I1128 04:12:49.231691  388252 system_pods.go:89] "kube-scheduler-embed-certs-672176" [973e06dd-2716-40fe-99ed-cf7844cd22b7] Running
	I1128 04:12:49.231697  388252 system_pods.go:89] "metrics-server-57f55c9bc5-ppnxv" [1c86fe3d-4460-4777-a7d7-57b1f6aad5f6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1128 04:12:49.231702  388252 system_pods.go:89] "storage-provisioner" [3304cb38-897a-482f-9a9d-9e287aca2ce4] Running
	I1128 04:12:49.231712  388252 system_pods.go:126] duration metric: took 203.944338ms to wait for k8s-apps to be running ...
	I1128 04:12:49.231724  388252 system_svc.go:44] waiting for kubelet service to be running ....
	I1128 04:12:49.231781  388252 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 04:12:49.247634  388252 system_svc.go:56] duration metric: took 15.898994ms WaitForService to wait for kubelet.
	I1128 04:12:49.247662  388252 kubeadm.go:581] duration metric: took 6.733807391s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1128 04:12:49.247681  388252 node_conditions.go:102] verifying NodePressure condition ...
	I1128 04:12:49.426882  388252 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I1128 04:12:49.426916  388252 node_conditions.go:123] node cpu capacity is 2
	I1128 04:12:49.426931  388252 node_conditions.go:105] duration metric: took 179.246183ms to run NodePressure ...
	I1128 04:12:49.426946  388252 start.go:228] waiting for startup goroutines ...
	I1128 04:12:49.426954  388252 start.go:233] waiting for cluster config update ...
	I1128 04:12:49.426965  388252 start.go:242] writing updated cluster config ...
	I1128 04:12:49.427242  388252 ssh_runner.go:195] Run: rm -f paused
	I1128 04:12:49.477142  388252 start.go:600] kubectl: 1.28.4, cluster: 1.28.4 (minor skew: 0)
	I1128 04:12:49.479448  388252 out.go:177] * Done! kubectl is now configured to use "embed-certs-672176" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Tue 2023-11-28 04:07:14 UTC, ends at Tue 2023-11-28 04:27:20 UTC. --
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.335520439Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145640335504388,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=8dd4d328-dce1-4b9a-9a4f-b99bfb8eae35 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.336222509Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eca3e727-bcc1-4c7e-b656-7c3b578c16db name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.336267460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eca3e727-bcc1-4c7e-b656-7c3b578c16db name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.336434131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eca3e727-bcc1-4c7e-b656-7c3b578c16db name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.373657874Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=ac1a0068-88a9-4f44-a5e5-08856f8b27c1 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.373741258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ac1a0068-88a9-4f44-a5e5-08856f8b27c1 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.374754156Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=5675f091-c7c4-40e9-9534-5f563bbfb3ea name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.375252306Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145640375237792,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=5675f091-c7c4-40e9-9534-5f563bbfb3ea name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.375745921Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b9ca8c49-4b2f-4ad4-ade3-a85187098864 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.375796431Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b9ca8c49-4b2f-4ad4-ade3-a85187098864 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.375944972Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b9ca8c49-4b2f-4ad4-ade3-a85187098864 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.415952298Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6497ffc4-e000-4cd3-a9e7-51bf0f9b7343 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.416119921Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6497ffc4-e000-4cd3-a9e7-51bf0f9b7343 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.417296558Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fb67777a-16d4-46d3-970b-d1486856aa28 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.417714083Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145640417699280,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fb67777a-16d4-46d3-970b-d1486856aa28 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.418645998Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5abb4fc8-3fa5-4b77-aad5-ccdbe7e18087 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.418717717Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5abb4fc8-3fa5-4b77-aad5-ccdbe7e18087 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.418989603Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5abb4fc8-3fa5-4b77-aad5-ccdbe7e18087 name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.451203170Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a06f90cc-0323-4c9b-b0ce-18d4fac9e9a5 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.451269172Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a06f90cc-0323-4c9b-b0ce-18d4fac9e9a5 name=/runtime.v1.RuntimeService/Version
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.452786083Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=18d0e5ba-55b3-469d-b357-322d3ea76755 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.453240270Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1701145640453225752,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=18d0e5ba-55b3-469d-b357-322d3ea76755 name=/runtime.v1.ImageService/ImageFsInfo
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.453915725Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7afdb8c2-53ef-4f63-bd87-a0bc7055867f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.453984443Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7afdb8c2-53ef-4f63-bd87-a0bc7055867f name=/runtime.v1.RuntimeService/ListContainers
	Nov 28 04:27:20 embed-certs-672176 crio[710]: time="2023-11-28 04:27:20.454242329Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac,PodSandboxId:23bae9ebe757911e97d850acd1e83c87d549534a35aee4e2d685a72243ab09ef,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1701144766550473149,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3304cb38-897a-482f-9a9d-9e287aca2ce4,},Annotations:map[string]string{io.kubernetes.container.hash: 445f6fd1,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89,PodSandboxId:543a0c27fed3c6d9bc8ea93968c8cb33660b8e04f1ea6753918bee7925a26551,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1701144766312074488,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-q7srf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a2390c61-7f2a-40ac-ad4c-c47e78a3eb90,},Annotations:map[string]string{io.kubernetes.container.hash: af22f2f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a,PodSandboxId:dc2637252b7291cd0f933af06d585af6f4fa3933e0ee8e8657f4eea153fb8d93,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1701144765021430249,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-48xtx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1229f57f-a420-4c63-ae05-8a051f556bbd,},Annotations:map[string]string{io.kubernetes.container.hash: 3ce06910,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381,PodSandboxId:38cde89d5ac454baad86c9b2291323e91c147e1a880a2147a02767061d1e5eea,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1701144742307516053,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 945ebe6e328796beac07f4450a6ecc1a,},An
notations:map[string]string{io.kubernetes.container.hash: 650dd1c8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173,PodSandboxId:50ac62d0c2bf7e2f7bcf0c38dd54f957695cb5c7a42fb4b3d8bdd3b576aac8ee,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1701144741977084095,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daba830fcaa1e18d3e7bb86bc4870c88,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300,PodSandboxId:e703dda3489554ca5b519d5ac8ff7cc862d9f5810d7e7fb98641129ec1c19ca9,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1701144741617905673,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 570e83de020514716d22d1c764157ee0,},Annotations:map[string
]string{io.kubernetes.container.hash: 3659388e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1,PodSandboxId:a332ada34261b1db808fad8aced9996a6e0b463007904aeecb477b26ff6e7572,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1701144741426845203,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-672176,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7fe46fb7a0db54841bf1ee918ac8f63
3,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7afdb8c2-53ef-4f63-bd87-a0bc7055867f name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	55999b180e46a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   23bae9ebe7579       storage-provisioner
	c4e30da1c07f5       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   543a0c27fed3c       kube-proxy-q7srf
	91c92742b6cfd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   dc2637252b729       coredns-5dd5756b68-48xtx
	fc3340b3a65d7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   38cde89d5ac45       etcd-embed-certs-672176
	6169d1fa99a35       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   50ac62d0c2bf7       kube-scheduler-embed-certs-672176
	01cf63ee24331       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   e703dda348955       kube-apiserver-embed-certs-672176
	f81885b2b8dd1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   a332ada34261b       kube-controller-manager-embed-certs-672176
	
	* 
	* ==> coredns [91c92742b6cfdb9b2ac289826db97ae444b243c2f90a72e637600b1ef09a074a] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39443 - 58819 "HINFO IN 8585624031149724927.8956002308733853960. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029917572s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-672176
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-672176
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=4b46ba7921457e6b2056c8a8c7d7cb78b2aad6e9
	                    minikube.k8s.io/name=embed-certs-672176
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_11_28T04_12_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 28 Nov 2023 04:12:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-672176
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 28 Nov 2023 04:27:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 28 Nov 2023 04:23:02 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 28 Nov 2023 04:23:02 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 28 Nov 2023 04:23:02 +0000   Tue, 28 Nov 2023 04:12:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 28 Nov 2023 04:23:02 +0000   Tue, 28 Nov 2023 04:12:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.208
	  Hostname:    embed-certs-672176
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a5cf2aae5a434ee495cec6b9bb579e26
	  System UUID:                a5cf2aae-5a43-4ee4-95ce-c6b9bb579e26
	  Boot ID:                    532f93ee-13ec-4e00-80cb-8b2b44b5a139
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-48xtx                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-672176                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-672176             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-672176    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-q7srf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-672176             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-ppnxv               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)    0 (0%)
	  memory             370Mi (17%)   170Mi (8%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node embed-certs-672176 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node embed-certs-672176 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node embed-certs-672176 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node embed-certs-672176 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node embed-certs-672176 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-672176 event: Registered Node embed-certs-672176 in Controller
	
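For reference, the percentages in the Allocated resources table above follow directly from the per-pod requests divided by the node's allocatable capacity (2000m CPU, 2165900Ki ≈ 2115Mi memory):

	850m CPU  = 100m + 100m + 250m + 200m + 100m + 100m (coredns, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, metrics-server); 850m / 2000m ≈ 42%
	370Mi mem = 70Mi + 100Mi + 200Mi; 370Mi / 2115Mi ≈ 17%; the single 170Mi limit (coredns) / 2115Mi ≈ 8%

So the node itself has ample headroom, and the AddonExistsAfterStop failure below is not a resource-pressure issue.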
	* 
	* ==> dmesg <==
	* [Nov28 04:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.069382] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.485292] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.676842] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.156986] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.685488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000010] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.309157] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.123275] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.165877] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.126326] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.232700] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +17.873368] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[Nov28 04:08] kauditd_printk_skb: 29 callbacks suppressed
	[Nov28 04:12] systemd-fstab-generator[3499]: Ignoring "noauto" for root device
	[  +9.804360] systemd-fstab-generator[3825]: Ignoring "noauto" for root device
	[ +12.996425] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.656013] kauditd_printk_skb: 9 callbacks suppressed
	
	* 
	* ==> etcd [fc3340b3a65d740678e203057d37b3555c365e618e9ace218331036d27fef381] <==
	* {"level":"info","ts":"2023-11-28T04:12:23.507081Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.208:2380"}
	{"level":"info","ts":"2023-11-28T04:12:23.507241Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.208:2380"}
	{"level":"info","ts":"2023-11-28T04:12:23.511209Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"d89ba707b55b57db","initial-advertise-peer-urls":["https://192.168.72.208:2380"],"listen-peer-urls":["https://192.168.72.208:2380"],"advertise-client-urls":["https://192.168.72.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-11-28T04:12:23.51342Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-11-28T04:12:23.916762Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db is starting a new election at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.916881Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became pre-candidate at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.916934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db received MsgPreVoteResp from d89ba707b55b57db at term 1"}
	{"level":"info","ts":"2023-11-28T04:12:23.917232Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became candidate at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917263Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db received MsgVoteResp from d89ba707b55b57db at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917299Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d89ba707b55b57db became leader at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.917328Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d89ba707b55b57db elected leader d89ba707b55b57db at term 2"}
	{"level":"info","ts":"2023-11-28T04:12:23.920349Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"d89ba707b55b57db","local-member-attributes":"{Name:embed-certs-672176 ClientURLs:[https://192.168.72.208:2379]}","request-path":"/0/members/d89ba707b55b57db/attributes","cluster-id":"390b9e353a6e0025","publish-timeout":"7s"}
	{"level":"info","ts":"2023-11-28T04:12:23.920664Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.921171Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:12:23.924165Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-11-28T04:12:23.924226Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-11-28T04:12:23.924279Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"390b9e353a6e0025","local-member-id":"d89ba707b55b57db","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924402Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924443Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-11-28T04:12:23.924483Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-11-28T04:12:23.924814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-11-28T04:12:23.925577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.208:2379"}
	{"level":"info","ts":"2023-11-28T04:22:24.499353Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":692}
	{"level":"info","ts":"2023-11-28T04:22:24.501888Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":692,"took":"2.012813ms","hash":3544821393}
	{"level":"info","ts":"2023-11-28T04:22:24.501972Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3544821393,"revision":692,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  04:27:20 up 20 min,  0 users,  load average: 0.18, 0.29, 0.24
	Linux embed-certs-672176 5.10.57 #1 SMP Thu Nov 16 18:26:12 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [01cf63ee243316a13bbd80f2ece1c3df00cfe1ae1c5b2bff1459399c59c67300] <==
	* W1128 04:22:27.396907       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:22:27.396963       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:22:27.396970       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:22:27.397173       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:22:27.397288       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:22:27.398591       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:23:26.240108       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:23:27.398272       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:23:27.398406       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:23:27.398454       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:23:27.399503       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:23:27.399598       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:23:27.399611       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:24:26.240758       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1128 04:25:26.240262       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1128 04:25:27.399118       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:25:27.399168       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1128 04:25:27.399181       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1128 04:25:27.400353       1 handler_proxy.go:93] no RequestInfo found in the context
	E1128 04:25:27.400476       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1128 04:25:27.400539       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1128 04:26:26.240170       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
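The repeating 503s for v1beta1.metrics.k8s.io above come from the API aggregation layer: the APIService is registered, but its backing metrics-server Service has no ready endpoints, which is consistent with the metrics-server pod never starting (see the ImagePullBackOff entries in the kubelet log below). A minimal way to confirm this against the same cluster, assuming the standard object names shown in these logs, would be:

	kubectl --context embed-certs-672176 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-672176 -n kube-system get endpoints metrics-server

An Available=False APIService (typically with reason MissingEndpoints or FailedDiscoveryCheck) together with an empty endpoints list matches the errors logged here.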
	* 
	* ==> kube-controller-manager [f81885b2b8dd1c8c624ec9132d877ac32987196c7adc4df1c1c3c3a35c6cc2f1] <==
	* I1128 04:21:42.051275       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:22:11.527268       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:22:12.067847       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:22:41.533739       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:22:42.077501       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:23:11.539752       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:23:12.085684       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:23:41.546588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:23:42.093944       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1128 04:23:55.056305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="357.355µs"
	I1128 04:24:07.056702       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="134.432µs"
	E1128 04:24:11.552808       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:24:12.102837       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:24:41.559464       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:24:42.112169       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:25:11.565576       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:25:12.120800       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:25:41.571361       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:25:42.130190       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:26:11.577542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:26:12.138508       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:26:41.584104       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:26:42.147503       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1128 04:27:11.590428       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1128 04:27:12.156238       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [c4e30da1c07f5793d557d04de279fd4d9ce1931f27c97b97b275f31a48143d89] <==
	* I1128 04:12:46.803514       1 server_others.go:69] "Using iptables proxy"
	I1128 04:12:46.826148       1 node.go:141] Successfully retrieved node IP: 192.168.72.208
	I1128 04:12:46.885971       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1128 04:12:46.886067       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1128 04:12:46.889127       1 server_others.go:152] "Using iptables Proxier"
	I1128 04:12:46.889660       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1128 04:12:46.889900       1 server.go:846] "Version info" version="v1.28.4"
	I1128 04:12:46.889936       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1128 04:12:46.891869       1 config.go:188] "Starting service config controller"
	I1128 04:12:46.892907       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1128 04:12:46.893399       1 config.go:97] "Starting endpoint slice config controller"
	I1128 04:12:46.893439       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1128 04:12:46.895969       1 config.go:315] "Starting node config controller"
	I1128 04:12:46.896089       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1128 04:12:46.993905       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1128 04:12:46.993906       1 shared_informer.go:318] Caches are synced for service config
	I1128 04:12:46.996213       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [6169d1fa99a35f9b90bbfd581625b4927362b33dc636a21b075ff8d0e5c72173] <==
	* E1128 04:12:26.401724       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:12:26.401733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:26.401740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:26.401751       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:12:26.401760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1128 04:12:26.402392       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1128 04:12:26.408859       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:12:26.408938       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:12:27.309448       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1128 04:12:27.309553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1128 04:12:27.328746       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.328814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:27.372976       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1128 04:12:27.373099       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1128 04:12:27.388730       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1128 04:12:27.388902       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1128 04:12:27.626879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.626969       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1128 04:12:27.635190       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1128 04:12:27.635273       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1128 04:12:27.697998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1128 04:12:27.698296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1128 04:12:27.723773       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1128 04:12:27.723857       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I1128 04:12:30.185112       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Tue 2023-11-28 04:07:14 UTC, ends at Tue 2023-11-28 04:27:20 UTC. --
	Nov 28 04:24:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:24:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:24:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:24:31 embed-certs-672176 kubelet[3832]: E1128 04:24:31.038653    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:24:45 embed-certs-672176 kubelet[3832]: E1128 04:24:45.039385    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:25:00 embed-certs-672176 kubelet[3832]: E1128 04:25:00.039152    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:25:14 embed-certs-672176 kubelet[3832]: E1128 04:25:14.039490    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:25:27 embed-certs-672176 kubelet[3832]: E1128 04:25:27.039341    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:25:30 embed-certs-672176 kubelet[3832]: E1128 04:25:30.120694    3832 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:25:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:25:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:25:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:25:38 embed-certs-672176 kubelet[3832]: E1128 04:25:38.039311    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:25:52 embed-certs-672176 kubelet[3832]: E1128 04:25:52.039190    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:26:03 embed-certs-672176 kubelet[3832]: E1128 04:26:03.038840    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:26:15 embed-certs-672176 kubelet[3832]: E1128 04:26:15.038750    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:26:28 embed-certs-672176 kubelet[3832]: E1128 04:26:28.039663    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:26:30 embed-certs-672176 kubelet[3832]: E1128 04:26:30.120284    3832 iptables.go:575] "Could not set up iptables canary" err=<
	Nov 28 04:26:30 embed-certs-672176 kubelet[3832]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Nov 28 04:26:30 embed-certs-672176 kubelet[3832]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Nov 28 04:26:30 embed-certs-672176 kubelet[3832]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Nov 28 04:26:42 embed-certs-672176 kubelet[3832]: E1128 04:26:42.040193    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:26:56 embed-certs-672176 kubelet[3832]: E1128 04:26:56.040309    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:27:08 embed-certs-672176 kubelet[3832]: E1128 04:27:08.038162    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	Nov 28 04:27:20 embed-certs-672176 kubelet[3832]: E1128 04:27:20.039172    3832 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-ppnxv" podUID="1c86fe3d-4460-4777-a7d7-57b1f6aad5f6"
	
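Two errors recur throughout this kubelet log. The ip6tables canary failure ("Table does not exist (do you need to insmod?)") means the guest kernel has no ip6tables nat table available; it is a known-noisy kubelet warning and unrelated to the test result. The ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4 is why metrics-server never becomes ready: fake.domain is not a resolvable registry, and this looks like the test's own metrics-server image override rather than a cluster fault. To see the image actually configured (assuming the deployment is named metrics-server, as the replica set name suggests):

	kubectl --context embed-certs-672176 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'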
	* 
	* ==> storage-provisioner [55999b180e46a76966250ba02a06f767d8185eb676c1d8bd4393a6ce89fa5cac] <==
	* I1128 04:12:46.708764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1128 04:12:46.721336       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1128 04:12:46.721428       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1128 04:12:46.732630       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1128 04:12:46.733424       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689!
	I1128 04:12:46.735780       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"374715d4-9bc6-4746-ae44-37fdb42dadbd", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689 became leader
	I1128 04:12:46.834587       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-672176_dd91b5c6-ccfc-42f8-9afd-74c05f48e689!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-672176 -n embed-certs-672176
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-672176 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-ppnxv
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv: exit status 1 (66.227538ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-ppnxv" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-672176 describe pod metrics-server-57f55c9bc5-ppnxv: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (329.41s)
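Note that the post-mortem itself races with the cluster: helpers_test.go:272 lists metrics-server-57f55c9bc5-ppnxv as non-running, but by the time the describe runs the pod no longer exists, hence the NotFound error and exit status 1. When reproducing this check by hand, selecting by label instead of pod name avoids that race (assuming the addon's standard k8s-app=metrics-server label):

	kubectl --context embed-certs-672176 -n kube-system get pods -l k8s-app=metrics-server -o wide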

                                                
                                    

Test pass (238/304)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 7.74
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
10 TestDownloadOnly/v1.28.4/json-events 6.32
11 TestDownloadOnly/v1.28.4/preload-exists 0
15 TestDownloadOnly/v1.28.4/LogsDuration 0.08
17 TestDownloadOnly/v1.29.0-rc.0/json-events 5.75
18 TestDownloadOnly/v1.29.0-rc.0/preload-exists 0
22 TestDownloadOnly/v1.29.0-rc.0/LogsDuration 0.08
23 TestDownloadOnly/DeleteAll 0.15
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.14
26 TestBinaryMirror 0.59
27 TestOffline 104.58
30 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
31 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
32 TestAddons/Setup 149.65
34 TestAddons/parallel/Registry 14.8
36 TestAddons/parallel/InspektorGadget 11.07
37 TestAddons/parallel/MetricsServer 5.95
38 TestAddons/parallel/HelmTiller 10.9
40 TestAddons/parallel/CSI 96.26
41 TestAddons/parallel/Headlamp 14.35
42 TestAddons/parallel/CloudSpanner 5.6
43 TestAddons/parallel/LocalPath 55.98
44 TestAddons/parallel/NvidiaDevicePlugin 5.55
47 TestAddons/serial/GCPAuth/Namespaces 0.12
49 TestCertOptions 75.92
50 TestCertExpiration 296.23
52 TestForceSystemdFlag 60.95
53 TestForceSystemdEnv 92.52
55 TestKVMDriverInstallOrUpdate 3
59 TestErrorSpam/setup 49.03
60 TestErrorSpam/start 0.4
61 TestErrorSpam/status 0.81
62 TestErrorSpam/pause 1.63
63 TestErrorSpam/unpause 1.8
64 TestErrorSpam/stop 2.29
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 97.38
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 39.67
71 TestFunctional/serial/KubeContext 0.04
72 TestFunctional/serial/KubectlGetPods 0.08
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.27
76 TestFunctional/serial/CacheCmd/cache/add_local 1.57
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.78
81 TestFunctional/serial/CacheCmd/cache/delete 0.13
82 TestFunctional/serial/MinikubeKubectlCmd 0.13
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
84 TestFunctional/serial/ExtraConfig 35.27
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.48
87 TestFunctional/serial/LogsFileCmd 1.58
88 TestFunctional/serial/InvalidService 4.81
90 TestFunctional/parallel/ConfigCmd 0.46
91 TestFunctional/parallel/DashboardCmd 14.65
92 TestFunctional/parallel/DryRun 0.29
93 TestFunctional/parallel/InternationalLanguage 0.15
94 TestFunctional/parallel/StatusCmd 1.1
98 TestFunctional/parallel/ServiceCmdConnect 11.62
99 TestFunctional/parallel/AddonsCmd 0.16
100 TestFunctional/parallel/PersistentVolumeClaim 40.64
102 TestFunctional/parallel/SSHCmd 0.57
103 TestFunctional/parallel/CpCmd 1.12
104 TestFunctional/parallel/MySQL 28.8
105 TestFunctional/parallel/FileSync 0.3
106 TestFunctional/parallel/CertSync 1.69
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
114 TestFunctional/parallel/License 0.2
115 TestFunctional/parallel/Version/short 0.08
116 TestFunctional/parallel/Version/components 1.01
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.43
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.43
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.43
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.39
121 TestFunctional/parallel/ImageCommands/ImageBuild 4.99
122 TestFunctional/parallel/ImageCommands/Setup 0.92
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
126 TestFunctional/parallel/ServiceCmd/DeployApp 13.29
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 6.19
137 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.56
138 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 7.73
139 TestFunctional/parallel/ServiceCmd/List 0.52
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.64
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
142 TestFunctional/parallel/ServiceCmd/Format 0.52
143 TestFunctional/parallel/ServiceCmd/URL 0.66
144 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.11
145 TestFunctional/parallel/ProfileCmd/profile_not_create 0.3
146 TestFunctional/parallel/ProfileCmd/profile_list 0.42
147 TestFunctional/parallel/ProfileCmd/profile_json_output 0.49
148 TestFunctional/parallel/MountCmd/any-port 19.23
149 TestFunctional/parallel/ImageCommands/ImageRemove 1.65
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 6.55
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 6.29
152 TestFunctional/parallel/MountCmd/specific-port 2.09
153 TestFunctional/parallel/MountCmd/VerifyCleanup 1.84
154 TestFunctional/delete_addon-resizer_images 0.07
155 TestFunctional/delete_my-image_image 0.02
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestIngressAddonLegacy/StartLegacyK8sCluster 105.81
162 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 13.06
163 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.59
167 TestJSONOutput/start/Command 98.48
168 TestJSONOutput/start/Audit 0
170 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/pause/Command 0.7
174 TestJSONOutput/pause/Audit 0
176 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
179 TestJSONOutput/unpause/Command 0.65
180 TestJSONOutput/unpause/Audit 0
182 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
183 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
185 TestJSONOutput/stop/Command 7.11
186 TestJSONOutput/stop/Audit 0
188 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
190 TestErrorJSONOutput 0.23
195 TestMainNoArgs 0.06
196 TestMinikubeProfile 97.77
199 TestMountStart/serial/StartWithMountFirst 27.08
200 TestMountStart/serial/VerifyMountFirst 0.42
201 TestMountStart/serial/StartWithMountSecond 26.63
202 TestMountStart/serial/VerifyMountSecond 0.43
203 TestMountStart/serial/DeleteFirst 0.68
204 TestMountStart/serial/VerifyMountPostDelete 0.43
205 TestMountStart/serial/Stop 1.17
206 TestMountStart/serial/RestartStopped 21.89
207 TestMountStart/serial/VerifyMountPostStop 0.43
210 TestMultiNode/serial/FreshStart2Nodes 118.75
211 TestMultiNode/serial/DeployApp2Nodes 4.27
213 TestMultiNode/serial/AddNode 48.77
214 TestMultiNode/serial/ProfileList 0.22
215 TestMultiNode/serial/CopyFile 7.92
216 TestMultiNode/serial/StopNode 3.01
217 TestMultiNode/serial/StartAfterStop 29.77
219 TestMultiNode/serial/DeleteNode 1.53
221 TestMultiNode/serial/RestartMultiNode 439.35
222 TestMultiNode/serial/ValidateNameConflict 53.43
229 TestScheduledStopUnix 120.52
235 TestKubernetesUpgrade 164.26
238 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
242 TestNoKubernetes/serial/StartWithK8s 77.31
247 TestNetworkPlugins/group/false 3.89
251 TestNoKubernetes/serial/StartWithStopK8s 11.79
252 TestNoKubernetes/serial/Start 27.64
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.23
254 TestNoKubernetes/serial/ProfileList 0.44
255 TestNoKubernetes/serial/Stop 1.18
256 TestNoKubernetes/serial/StartNoArgs 72.01
257 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
258 TestStoppedBinaryUpgrade/Setup 0.35
268 TestPause/serial/Start 108.53
269 TestPause/serial/SecondStartNoReconfiguration 29.96
270 TestNetworkPlugins/group/auto/Start 103.64
271 TestPause/serial/Pause 0.78
272 TestPause/serial/VerifyStatus 0.29
273 TestPause/serial/Unpause 0.8
274 TestPause/serial/PauseAgain 1.06
275 TestPause/serial/DeletePaused 1.08
276 TestPause/serial/VerifyDeletedResources 0.4
277 TestNetworkPlugins/group/kindnet/Start 75.7
278 TestNetworkPlugins/group/calico/Start 107.85
279 TestNetworkPlugins/group/auto/KubeletFlags 0.25
280 TestNetworkPlugins/group/auto/NetCatPod 11.47
281 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
282 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
283 TestNetworkPlugins/group/kindnet/NetCatPod 11.48
284 TestNetworkPlugins/group/auto/DNS 0.21
285 TestNetworkPlugins/group/auto/Localhost 0.15
286 TestNetworkPlugins/group/auto/HairPin 0.15
287 TestNetworkPlugins/group/kindnet/DNS 0.2
288 TestNetworkPlugins/group/kindnet/Localhost 0.23
289 TestNetworkPlugins/group/kindnet/HairPin 0.17
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.82
291 TestNetworkPlugins/group/custom-flannel/Start 91.52
292 TestNetworkPlugins/group/enable-default-cni/Start 128.65
293 TestNetworkPlugins/group/flannel/Start 130.75
294 TestNetworkPlugins/group/calico/ControllerPod 5.03
295 TestNetworkPlugins/group/calico/KubeletFlags 0.3
296 TestNetworkPlugins/group/calico/NetCatPod 15.66
297 TestNetworkPlugins/group/calico/DNS 0.18
298 TestNetworkPlugins/group/calico/Localhost 0.14
299 TestNetworkPlugins/group/calico/HairPin 0.19
300 TestNetworkPlugins/group/bridge/Start 122.55
301 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
302 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.49
303 TestNetworkPlugins/group/custom-flannel/DNS 0.17
304 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
305 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
307 TestStartStop/group/old-k8s-version/serial/FirstStart 136.19
308 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.26
309 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.44
310 TestNetworkPlugins/group/flannel/ControllerPod 5.03
311 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
312 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
313 TestNetworkPlugins/group/flannel/KubeletFlags 0.25
314 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
315 TestNetworkPlugins/group/flannel/NetCatPod 13.45
316 TestNetworkPlugins/group/flannel/DNS 0.25
317 TestNetworkPlugins/group/flannel/Localhost 0.22
318 TestNetworkPlugins/group/flannel/HairPin 0.21
320 TestStartStop/group/no-preload/serial/FirstStart 126.7
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 116.97
323 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
324 TestNetworkPlugins/group/bridge/NetCatPod 12.41
325 TestNetworkPlugins/group/bridge/DNS 0.27
326 TestNetworkPlugins/group/bridge/Localhost 0.19
327 TestNetworkPlugins/group/bridge/HairPin 0.18
329 TestStartStop/group/newest-cni/serial/FirstStart 65.08
330 TestStartStop/group/old-k8s-version/serial/DeployApp 8.48
331 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
333 TestStartStop/group/newest-cni/serial/DeployApp 0
334 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.53
336 TestStartStop/group/no-preload/serial/DeployApp 8.98
337 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.42
338 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
340 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.15
343 TestStartStop/group/old-k8s-version/serial/SecondStart 792.6
345 TestStartStop/group/newest-cni/serial/SecondStart 311.1
348 TestStartStop/group/no-preload/serial/SecondStart 629.02
349 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 600.3
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.37
353 TestStartStop/group/newest-cni/serial/Pause 2.92
355 TestStartStop/group/embed-certs/serial/FirstStart 138.09
356 TestStartStop/group/embed-certs/serial/DeployApp 9.47
357 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
360 TestStartStop/group/embed-certs/serial/SecondStart 629.85
x
+
TestDownloadOnly/v1.16.0/json-events (7.74s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.738843251s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (7.74s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-780173
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-780173: exit status 85 (76.589094ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:40 UTC |          |
	|         | -p download-only-780173        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 02:40:52
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 02:40:52.934175  340526 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:40:52.934329  340526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:40:52.934338  340526 out.go:309] Setting ErrFile to fd 2...
	I1128 02:40:52.934343  340526 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:40:52.934528  340526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	W1128 02:40:52.934662  340526 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: no such file or directory
	I1128 02:40:52.935238  340526 out.go:303] Setting JSON to true
	I1128 02:40:52.936832  340526 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5003,"bootTime":1701134250,"procs":947,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:40:52.936928  340526 start.go:138] virtualization: kvm guest
	I1128 02:40:52.939470  340526 out.go:97] [download-only-780173] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:40:52.941069  340526 out.go:169] MINIKUBE_LOCATION=17671
	W1128 02:40:52.939567  340526 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball: no such file or directory
	I1128 02:40:52.939600  340526 notify.go:220] Checking for updates...
	I1128 02:40:52.943915  340526 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:40:52.945484  340526 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:40:52.946923  340526 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:40:52.948403  340526 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1128 02:40:52.950964  340526 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 02:40:52.951220  340526 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 02:40:52.983520  340526 out.go:97] Using the kvm2 driver based on user configuration
	I1128 02:40:52.983561  340526 start.go:298] selected driver: kvm2
	I1128 02:40:52.983575  340526 start.go:902] validating driver "kvm2" against <nil>
	I1128 02:40:52.984060  340526 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:40:52.984220  340526 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 02:40:52.999223  340526 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 02:40:52.999306  340526 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1128 02:40:52.999797  340526 start_flags.go:394] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1128 02:40:52.999933  340526 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1128 02:40:52.999991  340526 cni.go:84] Creating CNI manager for ""
	I1128 02:40:53.000008  340526 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:40:53.000018  340526 start_flags.go:318] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1128 02:40:53.000024  340526 start_flags.go:323] config:
	{Name:download-only-780173 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-780173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:40:53.000227  340526 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:40:53.002225  340526 out.go:97] Downloading VM boot image ...
	I1128 02:40:53.002271  340526 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/iso/amd64/minikube-v1.32.1-1700142131-17634-amd64.iso
	I1128 02:40:55.763012  340526 out.go:97] Starting control plane node download-only-780173 in cluster download-only-780173
	I1128 02:40:55.763050  340526 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 02:40:55.792695  340526 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I1128 02:40:55.792771  340526 cache.go:56] Caching tarball of preloaded images
	I1128 02:40:55.792974  340526 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I1128 02:40:55.794886  340526 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1128 02:40:55.794909  340526 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:40:55.830911  340526 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-780173"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)
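
The download-only log above shows minikube's fetch pattern: the ISO is pulled with a checksum taken from a sidecar .sha256 URL, and the preload tarball is pulled with an md5 digest embedded in the query string (download.go:107), then stored under the .minikube cache. Below is a minimal Go sketch of that "download, then verify md5" pattern; it is not minikube's actual download code, and the helper name, URL, and paths in main are placeholders.

// Minimal sketch of the checksum-verified download pattern visible in the
// preload log lines above. Not minikube's real implementation; the URL,
// destination path, and digest passed in main are placeholders.
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 streams url into dst while hashing it, then rejects the
// file if the hex md5 digest does not match want.
func downloadWithMD5(url, dst, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Write to the file and the hash in one pass.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", dst, got, want)
	}
	return nil
}

func main() {
	// Placeholder values; the real tarball URL and md5 appear in the log above.
	fmt.Println(downloadWithMD5(
		"https://example.com/preloaded-images.tar.lz4",
		"/tmp/preloaded-images.tar.lz4",
		"432b600409d778ea7a21214e83948570"))
}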

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (6.32s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (6.315467265s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (6.32s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-780173
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-780173: exit status 85 (78.620016ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:40 UTC |          |
	|         | -p download-only-780173        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |          |
	|         | -p download-only-780173        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 02:41:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 02:41:00.750558  340582 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:41:00.750735  340582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:00.750746  340582 out.go:309] Setting ErrFile to fd 2...
	I1128 02:41:00.750753  340582 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:00.750981  340582 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	W1128 02:41:00.751118  340582 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: no such file or directory
	I1128 02:41:00.751571  340582 out.go:303] Setting JSON to true
	I1128 02:41:00.753191  340582 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5011,"bootTime":1701134250,"procs":943,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:41:00.753261  340582 start.go:138] virtualization: kvm guest
	I1128 02:41:00.755643  340582 out.go:97] [download-only-780173] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:41:00.757289  340582 out.go:169] MINIKUBE_LOCATION=17671
	I1128 02:41:00.755800  340582 notify.go:220] Checking for updates...
	I1128 02:41:00.760158  340582 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:41:00.761564  340582 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:41:00.762901  340582 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:41:00.764199  340582 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1128 02:41:00.766546  340582 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 02:41:00.767031  340582 config.go:182] Loaded profile config "download-only-780173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	W1128 02:41:00.767089  340582 start.go:810] api.Load failed for download-only-780173: filestore "download-only-780173": Docker machine "download-only-780173" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 02:41:00.767179  340582 driver.go:378] Setting default libvirt URI to qemu:///system
	W1128 02:41:00.767224  340582 start.go:810] api.Load failed for download-only-780173: filestore "download-only-780173": Docker machine "download-only-780173" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 02:41:00.798192  340582 out.go:97] Using the kvm2 driver based on existing profile
	I1128 02:41:00.798224  340582 start.go:298] selected driver: kvm2
	I1128 02:41:00.798231  340582 start.go:902] validating driver "kvm2" against &{Name:download-only-780173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-780173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:00.798641  340582 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:00.798748  340582 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 02:41:00.813493  340582 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 02:41:00.814621  340582 cni.go:84] Creating CNI manager for ""
	I1128 02:41:00.814641  340582 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:41:00.814659  340582 start_flags.go:323] config:
	{Name:download-only-780173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-780173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:00.814879  340582 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:00.816914  340582 out.go:97] Starting control plane node download-only-780173 in cluster download-only-780173
	I1128 02:41:00.816930  340582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 02:41:00.853527  340582 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 02:41:00.853550  340582 cache.go:56] Caching tarball of preloaded images
	I1128 02:41:00.853691  340582 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I1128 02:41:00.855564  340582 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I1128 02:41:00.855581  340582 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:41:00.888064  340582 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I1128 02:41:05.274981  340582 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:41:05.275082  340582 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-780173"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
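
The v1.28.4 run above saves and then verifies the checksum of the cached tarball (preload.go:249/:256), which is what lets the later preload-exists subtest pass without re-downloading. A rough sketch of such a cache-hit check follows, under the assumption that the expected digest is stored in a sidecar file next to the tarball; the real cache layout may differ.

// Sketch of a "preload already cached and valid?" check. The ".checksum"
// sidecar file and the path in main are assumptions, not minikube's layout.
package main

import (
	"bytes"
	"crypto/md5"
	"fmt"
	"io"
	"os"
)

// cachedPreloadValid reports whether tarball exists on disk and its md5
// equals the raw digest stored in tarball+".checksum".
func cachedPreloadValid(tarball string) (bool, error) {
	f, err := os.Open(tarball)
	if os.IsNotExist(err) {
		return false, nil // cache miss: nothing downloaded yet
	}
	if err != nil {
		return false, err
	}
	defer f.Close()

	want, err := os.ReadFile(tarball + ".checksum")
	if err != nil {
		return false, err
	}

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return false, err
	}
	return bytes.Equal(h.Sum(nil), want), nil
}

func main() {
	ok, err := cachedPreloadValid("/tmp/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4")
	fmt.Println(ok, err)
}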

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/json-events (5.75s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-780173 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.751964238s)
--- PASS: TestDownloadOnly/v1.29.0-rc.0/json-events (5.75s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/LogsDuration
aaa_download_only_test.go:172: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-780173
aaa_download_only_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-780173: exit status 85 (78.170809ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only           | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:40 UTC |          |
	|         | -p download-only-780173           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |          |
	|         | -p download-only-780173           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	| start   | -o=json --download-only           | download-only-780173 | jenkins | v1.32.0 | 28 Nov 23 02:41 UTC |          |
	|         | -p download-only-780173           |                      |         |         |                     |          |
	|         | --force --alsologtostderr         |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.29.0-rc.0 |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|         | --driver=kvm2                     |                      |         |         |                     |          |
	|         | --container-runtime=crio          |                      |         |         |                     |          |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/11/28 02:41:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1128 02:41:07.146354  340627 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:41:07.146637  340627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:07.146647  340627 out.go:309] Setting ErrFile to fd 2...
	I1128 02:41:07.146654  340627 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:41:07.146876  340627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	W1128 02:41:07.146996  340627 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: open /home/jenkins/minikube-integration/17671-333305/.minikube/config/config.json: no such file or directory
	I1128 02:41:07.147438  340627 out.go:303] Setting JSON to true
	I1128 02:41:07.148979  340627 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5017,"bootTime":1701134250,"procs":943,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:41:07.149050  340627 start.go:138] virtualization: kvm guest
	I1128 02:41:07.151184  340627 out.go:97] [download-only-780173] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:41:07.152748  340627 out.go:169] MINIKUBE_LOCATION=17671
	I1128 02:41:07.151362  340627 notify.go:220] Checking for updates...
	I1128 02:41:07.155599  340627 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:41:07.156971  340627 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:41:07.158127  340627 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:41:07.159372  340627 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1128 02:41:07.161911  340627 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1128 02:41:07.162393  340627 config.go:182] Loaded profile config "download-only-780173": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	W1128 02:41:07.162444  340627 start.go:810] api.Load failed for download-only-780173: filestore "download-only-780173": Docker machine "download-only-780173" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 02:41:07.162530  340627 driver.go:378] Setting default libvirt URI to qemu:///system
	W1128 02:41:07.162566  340627 start.go:810] api.Load failed for download-only-780173: filestore "download-only-780173": Docker machine "download-only-780173" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1128 02:41:07.193542  340627 out.go:97] Using the kvm2 driver based on existing profile
	I1128 02:41:07.193572  340627 start.go:298] selected driver: kvm2
	I1128 02:41:07.193577  340627 start.go:902] validating driver "kvm2" against &{Name:download-only-780173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-780173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:07.193949  340627 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:07.194013  340627 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17671-333305/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1128 02:41:07.208495  340627 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I1128 02:41:07.209560  340627 cni.go:84] Creating CNI manager for ""
	I1128 02:41:07.209587  340627 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1128 02:41:07.209602  340627 start_flags.go:323] config:
	{Name:download-only-780173 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.0 ClusterName:download-only-780173 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:41:07.209754  340627 iso.go:125] acquiring lock: {Name:mkcf6be5530b10e35c21f89bc9951985b3471b6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1128 02:41:07.211428  340627 out.go:97] Starting control plane node download-only-780173 in cluster download-only-780173
	I1128 02:41:07.211441  340627 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 02:41:07.244002  340627 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1128 02:41:07.244042  340627 cache.go:56] Caching tarball of preloaded images
	I1128 02:41:07.244251  340627 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 02:41:07.246123  340627 out.go:97] Downloading Kubernetes v1.29.0-rc.0 preload ...
	I1128 02:41:07.246142  340627 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:41:07.282876  340627 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.0/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:5686edee2f3c2c02d5f5e95cbdafe8b5 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4
	I1128 02:41:10.752987  340627 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:41:10.753080  340627 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17671-333305/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.0-cri-o-overlay-amd64.tar.lz4 ...
	I1128 02:41:11.569173  340627 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.0 on crio
	I1128 02:41:11.569326  340627 profile.go:148] Saving config to /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/download-only-780173/config.json ...
	I1128 02:41:11.569541  340627 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.0 and runtime crio
	I1128 02:41:11.569724  340627 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17671-333305/.minikube/cache/linux/amd64/v1.29.0-rc.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-780173"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:173: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.0/LogsDuration (0.08s)
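
Unlike the preload tarballs, the kubectl binary in the run above is fetched with checksum=file:<url>, i.e. the expected digest is itself downloaded from the published .sha256 file rather than being embedded in the query string. A hedged sketch of that flow is below; fetchExpectedSHA256 and sha256Of are illustrative helpers, not minikube APIs, though the dl.k8s.io URLs are taken from the log.

// Sketch of the "checksum=file:<url>" form seen in the kubectl download above.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// fetchExpectedSHA256 downloads a .sha256 file and returns the hex digest it
// contains (such files usually hold the digest, optionally followed by a name).
func fetchExpectedSHA256(url string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	fields := strings.Fields(string(b))
	if len(fields) == 0 {
		return "", fmt.Errorf("empty checksum file at %s", url)
	}
	return fields[0], nil
}

// sha256Of hashes an arbitrary stream, e.g. the body of the kubectl download.
func sha256Of(r io.Reader) (string, error) {
	h := sha256.New()
	if _, err := io.Copy(h, r); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	want, err := fetchExpectedSHA256("https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl.sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	resp, err := http.Get("https://dl.k8s.io/release/v1.29.0-rc.0/bin/linux/amd64/kubectl")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	got, err := sha256Of(resp.Body)
	fmt.Println(got == want, err)
}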

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:190: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:202: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-780173
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:307: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-179554 --alsologtostderr --binary-mirror http://127.0.0.1:40849 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-179554" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-179554
--- PASS: TestBinaryMirror (0.59s)
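
TestBinaryMirror starts minikube with --binary-mirror http://127.0.0.1:40849, so the Kubernetes binaries are pulled from a local HTTP server instead of dl.k8s.io. A minimal stand-in for such a mirror is just a file server over a directory of pre-downloaded binaries; the directory name and the dl.k8s.io-style path layout in the comment below are assumptions, not the test's actual server.

// Minimal sketch of a local binary mirror: serve a directory over HTTP on the
// address the test passes to --binary-mirror. Directory layout is assumed.
package main

import (
	"log"
	"net/http"
)

func main() {
	// e.g. ./mirror/v1.28.4/bin/linux/amd64/kubectl would be served at
	// http://127.0.0.1:40849/v1.28.4/bin/linux/amd64/kubectl
	fs := http.FileServer(http.Dir("./mirror"))
	log.Println("serving binary mirror on 127.0.0.1:40849")
	log.Fatal(http.ListenAndServe("127.0.0.1:40849", fs))
}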

                                                
                                    
x
+
TestOffline (104.58s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-428381 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-428381 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m43.354527423s)
helpers_test.go:175: Cleaning up "offline-crio-428381" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-428381
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-428381: (1.224350541s)
--- PASS: TestOffline (104.58s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:927: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-681229
addons_test.go:927: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-681229: exit status 85 (66.831564ms)

                                                
                                                
-- stdout --
	* Profile "addons-681229" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681229"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:938: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-681229
addons_test.go:938: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-681229: exit status 85 (66.155627ms)

                                                
                                                
-- stdout --
	* Profile "addons-681229" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-681229"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (149.65s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-681229 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-681229 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m29.653811474s)
--- PASS: TestAddons/Setup (149.65s)

                                                
                                    
x
+
TestAddons/parallel/Registry (14.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:329: registry stabilized in 33.25069ms
addons_test.go:331: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-k72qb" [ab234015-31f7-499a-9928-a0ad70000068] Running
addons_test.go:331: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.019641352s
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h9tnv" [f745a55b-172d-43c2-a850-12753f22f47a] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015288554s
addons_test.go:339: (dbg) Run:  kubectl --context addons-681229 delete po -l run=registry-test --now
addons_test.go:344: (dbg) Run:  kubectl --context addons-681229 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:344: (dbg) Done: kubectl --context addons-681229 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.912603578s)
addons_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 ip
2023/11/28 02:43:57 [DEBUG] GET http://192.168.39.100:5000
addons_test.go:387: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.80s)
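
The registry check above shells out to kubectl and probes the in-cluster Service DNS name from a throwaway busybox pod. A condensed Go sketch of the same pattern (exec'ing kubectl from test code) follows; the context name is taken from the log, -it is replaced by -i since there is no TTY here, and the helper name is made up.

// Sketch of probing an in-cluster registry Service via a temporary pod.
package main

import (
	"fmt"
	"os/exec"
)

// probeInClusterRegistry runs a one-off busybox pod that wgets the registry
// Service DNS name, then removes the pod (--rm).
func probeInClusterRegistry(kubeContext string) error {
	cmd := exec.Command("kubectl", "--context", kubeContext,
		"run", "--rm", "registry-test", "--restart=Never",
		"--image=gcr.io/k8s-minikube/busybox", "-i", "--",
		"sh", "-c", "wget --spider -S http://registry.kube-system.svc.cluster.local")
	out, err := cmd.CombinedOutput()
	fmt.Println(string(out))
	return err
}

func main() {
	if err := probeInClusterRegistry("addons-681229"); err != nil {
		fmt.Println("registry probe failed:", err)
	}
}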

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.07s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-4s6mg" [124f70fc-34f0-457c-88b8-0ca2564409e2] Running
addons_test.go:837: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.014898048s
addons_test.go:840: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-681229
addons_test.go:840: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-681229: (6.049377119s)
--- PASS: TestAddons/parallel/InspektorGadget (11.07s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.95s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:406: metrics-server stabilized in 33.256245ms
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-fdxck" [f314eebf-cb93-487b-ad2b-9dff2d03acb1] Running
addons_test.go:408: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.02172897s
addons_test.go:414: (dbg) Run:  kubectl --context addons-681229 top pods -n kube-system
addons_test.go:431: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.95s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (10.9s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:455: tiller-deploy stabilized in 4.067021ms
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-mphnj" [166abcd3-ff81-472d-b1e6-c0aad1a85f5b] Running
addons_test.go:457: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.014015614s
addons_test.go:472: (dbg) Run:  kubectl --context addons-681229 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:472: (dbg) Done: kubectl --context addons-681229 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.217902658s)
addons_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.90s)

                                                
                                    
x
+
TestAddons/parallel/CSI (96.26s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:560: csi-hostpath-driver pods stabilized in 33.95114ms
addons_test.go:563: (dbg) Run:  kubectl --context addons-681229 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-681229 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:578: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [330c6fa9-52c4-42c4-ade4-87237226f151] Pending
helpers_test.go:344: "task-pv-pod" [330c6fa9-52c4-42c4-ade4-87237226f151] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [330c6fa9-52c4-42c4-ade4-87237226f151] Running
addons_test.go:578: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.021733777s
addons_test.go:583: (dbg) Run:  kubectl --context addons-681229 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:588: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-681229 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-681229 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-681229 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:593: (dbg) Run:  kubectl --context addons-681229 delete pod task-pv-pod
addons_test.go:599: (dbg) Run:  kubectl --context addons-681229 delete pvc hpvc
addons_test.go:605: (dbg) Run:  kubectl --context addons-681229 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:610: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:615: (dbg) Run:  kubectl --context addons-681229 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:620: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [a46ecf40-3e07-4c75-b2b9-8b00a32c69d2] Pending
helpers_test.go:344: "task-pv-pod-restore" [a46ecf40-3e07-4c75-b2b9-8b00a32c69d2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [a46ecf40-3e07-4c75-b2b9-8b00a32c69d2] Running
addons_test.go:620: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.020627263s
addons_test.go:625: (dbg) Run:  kubectl --context addons-681229 delete pod task-pv-pod-restore
addons_test.go:629: (dbg) Run:  kubectl --context addons-681229 delete pvc hpvc-restore
addons_test.go:633: (dbg) Run:  kubectl --context addons-681229 delete volumesnapshot new-snapshot-demo
addons_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:637: (dbg) Done: out/minikube-linux-amd64 -p addons-681229 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.81963579s)
addons_test.go:641: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (96.26s)
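For reference, the PVC and snapshot readiness polling captured above can be reproduced by hand. A minimal sketch using the same jsonpath expressions the helpers log, with a placeholder context name:

    # poll the PVC phase until it reports Bound
    kubectl --context <profile> get pvc hpvc -o jsonpath='{.status.phase}' -n default
    # poll the snapshot until readyToUse reports true
    kubectl --context <profile> get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}' -n default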

                                                
                                    
TestAddons/parallel/Headlamp (14.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:823: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-681229 --alsologtostderr -v=1
addons_test.go:823: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-681229 --alsologtostderr -v=1: (1.312257247s)
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-777fd4b855-x8dgt" [239d7f4a-b099-4dd4-9010-b78d4265aa47] Pending
helpers_test.go:344: "headlamp-777fd4b855-x8dgt" [239d7f4a-b099-4dd4-9010-b78d4265aa47] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-777fd4b855-x8dgt" [239d7f4a-b099-4dd4-9010-b78d4265aa47] Running
addons_test.go:828: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.039180573s
--- PASS: TestAddons/parallel/Headlamp (14.35s)
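The same enable-and-wait flow can be driven manually. A sketch assuming a placeholder profile name and the pod label selected above:

    minikube addons enable headlamp -p <profile>
    # wait for the headlamp pod to report Ready, matching the selector the test uses
    kubectl wait --for=condition=Ready pod -l app.kubernetes.io/name=headlamp -n headlamp --timeout=8m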

                                                
                                    
TestAddons/parallel/CloudSpanner (5.6s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5649c69bf6-wfj6k" [54f2a435-d2b4-4066-ad00-3fb7a0a3183a] Running
addons_test.go:856: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.010315513s
addons_test.go:859: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-681229
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

                                                
                                    
TestAddons/parallel/LocalPath (55.98s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:872: (dbg) Run:  kubectl --context addons-681229 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:878: (dbg) Run:  kubectl --context addons-681229 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:882: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9117210c-5cb8-45c0-ac44-6343f0a8a70e] Pending
helpers_test.go:344: "test-local-path" [9117210c-5cb8-45c0-ac44-6343f0a8a70e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9117210c-5cb8-45c0-ac44-6343f0a8a70e] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9117210c-5cb8-45c0-ac44-6343f0a8a70e] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:885: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.017362084s
addons_test.go:890: (dbg) Run:  kubectl --context addons-681229 get pvc test-pvc -o=json
addons_test.go:899: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 ssh "cat /opt/local-path-provisioner/pvc-b94d0112-df69-4553-9f14-bebd2794b54c_default_test-pvc/file1"
addons_test.go:911: (dbg) Run:  kubectl --context addons-681229 delete pod test-local-path
addons_test.go:915: (dbg) Run:  kubectl --context addons-681229 delete pvc test-pvc
addons_test.go:919: (dbg) Run:  out/minikube-linux-amd64 -p addons-681229 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:919: (dbg) Done: out/minikube-linux-amd64 -p addons-681229 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.267675072s)
--- PASS: TestAddons/parallel/LocalPath (55.98s)
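The test applies a PVC plus a writer pod from testdata, then reads the written file back from the provisioner's host path over SSH. A hedged manual equivalent; the PV name embedded in the path differs on every run:

    # find the PV bound to the claim, then read the provisioned directory on the node
    kubectl --context <profile> get pvc test-pvc -o jsonpath='{.spec.volumeName}'
    minikube -p <profile> ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"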

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-bp85w" [8f81dd73-e882-4dc9-bd65-972a34309eed] Running
addons_test.go:951: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.01518681s
addons_test.go:954: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-681229
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.55s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:649: (dbg) Run:  kubectl --context addons-681229 create ns new-namespace
addons_test.go:663: (dbg) Run:  kubectl --context addons-681229 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)
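This check only asserts that the gcp-auth secret gets copied into a freshly created namespace. A minimal manual equivalent; the namespace name is arbitrary:

    kubectl --context <profile> create ns demo-ns
    # the gcp-auth addon should have replicated its secret into the new namespace
    kubectl --context <profile> get secret gcp-auth -n demo-ns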

                                                
                                    
TestCertOptions (75.92s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-140182 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-140182 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m14.271381629s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-140182 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-140182 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-140182 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-140182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-140182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-140182: (1.090828024s)
--- PASS: TestCertOptions (75.92s)
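The extra --apiserver-ips, --apiserver-names and --apiserver-port values are verified by dumping the serving certificate on the node. A sketch that narrows the same openssl output to the SAN block:

    minikube -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # 192.168.15.15, localhost and www.google.com should appear among the SANs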

                                                
                                    
TestCertExpiration (296.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456035 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E1128 03:38:34.223060  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456035 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m23.093300352s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-456035 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-456035 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (32.287767169s)
helpers_test.go:175: Cleaning up "cert-expiration-456035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-456035
--- PASS: TestCertExpiration (296.23s)
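The profile is first started with --cert-expiration=3m and later restarted with --cert-expiration=8760h, which forces minikube to regenerate the soon-to-expire certificates. A hedged way to inspect the current expiry by hand:

    # print the apiserver certificate's notAfter date on the node
    minikube -p <profile> ssh "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"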

                                                
                                    
TestForceSystemdFlag (60.95s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-708522 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-708522 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (59.686418694s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-708522 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-708522" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-708522
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-708522: (1.040119246s)
--- PASS: TestForceSystemdFlag (60.95s)
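--force-systemd is checked by reading CRI-O's drop-in config over SSH, as in the cat above. A sketch of the specific line one would look for, assuming the stock cgroup_manager key in 02-crio.conf:

    minikube -p <profile> ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expected with --force-systemd: cgroup_manager = "systemd"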

                                                
                                    
TestForceSystemdEnv (92.52s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-814013 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-814013 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m31.481136776s)
helpers_test.go:175: Cleaning up "force-systemd-env-814013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-814013
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-814013: (1.035153181s)
--- PASS: TestForceSystemdEnv (92.52s)

                                                
                                    
TestKVMDriverInstallOrUpdate (3s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.00s)

                                                
                                    
TestErrorSpam/setup (49.03s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-077195 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-077195 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-077195 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-077195 --driver=kvm2  --container-runtime=crio: (49.026308721s)
--- PASS: TestErrorSpam/setup (49.03s)

                                                
                                    
TestErrorSpam/start (0.4s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 start --dry-run
--- PASS: TestErrorSpam/start (0.40s)

                                                
                                    
TestErrorSpam/status (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 status
--- PASS: TestErrorSpam/status (0.81s)

                                                
                                    
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (2.29s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 stop: (2.101849861s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-077195 --log_dir /tmp/nospam-077195 stop
--- PASS: TestErrorSpam/stop (2.29s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17671-333305/.minikube/files/etc/test/nested/copy/340515/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (97.38s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-068418 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m37.384429235s)
--- PASS: TestFunctional/serial/StartWithProxy (97.38s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.67s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-068418 --alsologtostderr -v=8: (39.665838458s)
functional_test.go:659: soft start took 39.666831691s for "functional-068418" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.67s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-068418 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:3.1: (1.039576688s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:3.3: (1.096479545s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 cache add registry.k8s.io/pause:latest: (1.133416876s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-068418 /tmp/TestFunctionalserialCacheCmdcacheadd_local626351761/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache add minikube-local-cache-test:functional-068418
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 cache add minikube-local-cache-test:functional-068418: (1.233730169s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache delete minikube-local-cache-test:functional-068418
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-068418
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (241.259637ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 cache reload: (1.001023101s)
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.78s)
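The reload sequence deletes the image from the node's runtime, confirms crictl inspecti now fails, then pushes everything in the local cache back onto the node. A condensed sketch of the same cycle with a placeholder profile:

    minikube -p <profile> ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
    minikube -p <profile> cache reload
    minikube -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again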

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 kubectl -- --context functional-068418 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-068418 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-068418 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.273701217s)
functional_test.go:757: restart took 35.273869452s for "functional-068418" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.27s)
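--extra-config passes component flags through to kubeadm; here the apiserver picks up an extra admission plugin and the cluster is restarted with --wait=all. A hedged check that the flag reached the running apiserver:

    minikube start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    # the static pod's command line should now carry the admission plugin
    kubectl --context <profile> -n kube-system get pods -l component=kube-apiserver -o yaml | grep enable-admission-plugins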

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-068418 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
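The health check walks the control-plane pods and reports phase and Ready status for each component. An equivalent one-liner over the same label selector (a sketch, not the harness's exact query):

    kubectl --context <profile> get po -l tier=control-plane -n kube-system \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'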

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 logs: (1.474928035s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 logs --file /tmp/TestFunctionalserialLogsFileCmd2272041418/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 logs --file /tmp/TestFunctionalserialLogsFileCmd2272041418/001/logs.txt: (1.578971986s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                    
TestFunctional/serial/InvalidService (4.81s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-068418 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-068418
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-068418: exit status 115 (310.514793ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.18:30986 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-068418 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-068418 delete -f testdata/invalidsvc.yaml: (1.178434521s)
--- PASS: TestFunctional/serial/InvalidService (4.81s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 config get cpus: exit status 14 (67.189254ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 config get cpus: exit status 14 (78.261302ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)
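minikube config get exits with status 14 when the key is not set, which is what both Non-zero exit entries above capture. A compact sketch of the set/get/unset round trip:

    minikube -p <profile> config get cpus      # exit 14: key not found in config
    minikube -p <profile> config set cpus 2
    minikube -p <profile> config get cpus      # prints 2
    minikube -p <profile> config unset cpus    # the next get exits 14 again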

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-068418 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-068418 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 348369: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.65s)

                                                
                                    
TestFunctional/parallel/DryRun (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-068418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (149.335606ms)

                                                
                                                
-- stdout --
	* [functional-068418] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 02:54:07.669081  348241 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:54:07.669286  348241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:07.669297  348241 out.go:309] Setting ErrFile to fd 2...
	I1128 02:54:07.669304  348241 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:07.669511  348241 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 02:54:07.670105  348241 out.go:303] Setting JSON to false
	I1128 02:54:07.671074  348241 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5798,"bootTime":1701134250,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:54:07.671142  348241 start.go:138] virtualization: kvm guest
	I1128 02:54:07.673487  348241 out.go:177] * [functional-068418] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 02:54:07.675143  348241 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 02:54:07.675093  348241 notify.go:220] Checking for updates...
	I1128 02:54:07.676784  348241 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:54:07.678399  348241 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:54:07.679904  348241 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:54:07.681344  348241 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 02:54:07.682753  348241 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 02:54:07.685042  348241 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 02:54:07.685510  348241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:54:07.685574  348241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:54:07.701074  348241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43627
	I1128 02:54:07.701574  348241 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:54:07.702142  348241 main.go:141] libmachine: Using API Version  1
	I1128 02:54:07.702163  348241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:54:07.702516  348241 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:54:07.702710  348241 main.go:141] libmachine: (functional-068418) Calling .DriverName
	I1128 02:54:07.703006  348241 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 02:54:07.703463  348241 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:54:07.703517  348241 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:54:07.717741  348241 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38063
	I1128 02:54:07.718169  348241 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:54:07.718628  348241 main.go:141] libmachine: Using API Version  1
	I1128 02:54:07.718658  348241 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:54:07.718952  348241 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:54:07.719225  348241 main.go:141] libmachine: (functional-068418) Calling .DriverName
	I1128 02:54:07.750843  348241 out.go:177] * Using the kvm2 driver based on existing profile
	I1128 02:54:07.752302  348241 start.go:298] selected driver: kvm2
	I1128 02:54:07.752320  348241 start.go:902] validating driver "kvm2" against &{Name:functional-068418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-068418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:54:07.752413  348241 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 02:54:07.754657  348241 out.go:177] 
	W1128 02:54:07.755965  348241 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1128 02:54:07.757323  348241 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.29s)
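The dry run exercises minikube's memory validation: a 250MB request is rejected with RSRC_INSUFFICIENT_REQ_MEMORY and exit status 23 before any VM work starts. A minimal reproduction with a placeholder profile:

    minikube start -p <profile> --dry-run --memory 250MB --driver=kvm2 --container-runtime=crio
    echo $?   # 23: requested memory is below the usable minimum of 1800MB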

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-068418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-068418 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (152.360532ms)

                                                
                                                
-- stdout --
	* [functional-068418] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 02:54:07.962015  348295 out.go:296] Setting OutFile to fd 1 ...
	I1128 02:54:07.962158  348295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:07.962166  348295 out.go:309] Setting ErrFile to fd 2...
	I1128 02:54:07.962171  348295 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 02:54:07.962451  348295 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 02:54:07.962989  348295 out.go:303] Setting JSON to false
	I1128 02:54:07.963890  348295 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5798,"bootTime":1701134250,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 02:54:07.963951  348295 start.go:138] virtualization: kvm guest
	I1128 02:54:07.966355  348295 out.go:177] * [functional-068418] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I1128 02:54:07.967846  348295 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 02:54:07.967878  348295 notify.go:220] Checking for updates...
	I1128 02:54:07.969152  348295 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 02:54:07.970703  348295 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 02:54:07.972236  348295 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 02:54:07.973619  348295 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 02:54:07.975171  348295 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 02:54:07.977331  348295 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 02:54:07.977858  348295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:54:07.977912  348295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:54:07.992600  348295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41711
	I1128 02:54:07.993006  348295 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:54:07.993665  348295 main.go:141] libmachine: Using API Version  1
	I1128 02:54:07.993694  348295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:54:07.994039  348295 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:54:07.994211  348295 main.go:141] libmachine: (functional-068418) Calling .DriverName
	I1128 02:54:07.994452  348295 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 02:54:07.994744  348295 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 02:54:07.994780  348295 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 02:54:08.010741  348295 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43309
	I1128 02:54:08.011130  348295 main.go:141] libmachine: () Calling .GetVersion
	I1128 02:54:08.011584  348295 main.go:141] libmachine: Using API Version  1
	I1128 02:54:08.011607  348295 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 02:54:08.011900  348295 main.go:141] libmachine: () Calling .GetMachineName
	I1128 02:54:08.012112  348295 main.go:141] libmachine: (functional-068418) Calling .DriverName
	I1128 02:54:08.047552  348295 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1128 02:54:08.049033  348295 start.go:298] selected driver: kvm2
	I1128 02:54:08.049058  348295 start.go:902] validating driver "kvm2" against &{Name:functional-068418 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17634/minikube-v1.32.1-1700142131-17634-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1700142204-17634@sha256:b5ff7180d8eca5924b7e763cf222f5d9cfa39b21ab2c921f1394f3275e214b50 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:functional-068418 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.18 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1128 02:54:08.049199  348295 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 02:54:08.051261  348295 out.go:177] 
	W1128 02:54:08.052499  348295 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1128 02:54:08.053829  348295 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.15s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
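The three invocations above cover the default, Go-template, and JSON output formats of minikube status. A quick manual spot-check of the same data (a sketch, assuming jq is available on the host; the test itself does not use jq):

    out/minikube-linux-amd64 -p functional-068418 status -o json | jq -r '.Host, .Kubelet, .APIServer, .Kubeconfig'
    # on a healthy profile this typically prints Running, Running, Running, Configured (one per line)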

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-068418 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-068418 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-vjs4f" [b3cd98aa-05f5-4094-b632-05e2ccca2f9e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-vjs4f" [b3cd98aa-05f5-4094-b632-05e2ccca2f9e] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.033265496s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.18:31820
functional_test.go:1674: http://192.168.39.18:31820: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-vjs4f

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.18:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.18:31820
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.62s)
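The echoserver reply above confirms the NodePort service is reachable from the host. To repeat the check by hand (a sketch, assuming the functional-068418 profile is still running), ask minikube for the service URL and curl it:

    URL=$(out/minikube-linux-amd64 -p functional-068418 service hello-node-connect --url)
    curl -s "$URL"    # should return the same Hostname / Request Information block shown above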

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fc96fe46-0c49-4bc8-8cfb-3a1003047804] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.018050887s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-068418 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-068418 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-068418 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-068418 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068418 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [9600759c-c615-4c86-8f8f-4f80cfa6678d] Pending
helpers_test.go:344: "sp-pod" [9600759c-c615-4c86-8f8f-4f80cfa6678d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [9600759c-c615-4c86-8f8f-4f80cfa6678d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.025449677s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-068418 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-068418 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-068418 delete -f testdata/storage-provisioner/pod.yaml: (2.906637615s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-068418 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [617ffd6c-a905-4519-be04-a6af39cae29c] Pending
helpers_test.go:344: "sp-pod" [617ffd6c-a905-4519-be04-a6af39cae29c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E1128 02:54:04.155823  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [617ffd6c-a905-4519-be04-a6af39cae29c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.042991094s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-068418 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.64s)
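The sequence above is the core persistence check: a file written to the claim before the pod is deleted must still be present after a new pod mounts the same claim. Condensed into the equivalent manual steps (same manifests and pod name as the test; the kubectl wait calls are added here for readability):

    kubectl --context functional-068418 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-068418 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-068418 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-068418 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-068418 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-068418 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-068418 wait --for=condition=Ready pod/sp-pod --timeout=3m
    kubectl --context functional-068418 exec sp-pod -- ls /tmp/mount    # "foo" should survive the pod restart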

                                                
                                    
TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.57s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh -n functional-068418 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 cp functional-068418:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3440143405/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh -n functional-068418 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.12s)
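The cp test copies a local file into the VM and then copies it back out to the temp directory shown above. A simple round-trip verification (a sketch, using the temp path reported in this run):

    diff testdata/cp-test.txt /tmp/TestFunctionalparallelCpCmd3440143405/001/cp-test.txt && echo "round trip OK"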

                                                
                                    
TestFunctional/parallel/MySQL (28.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-068418 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-xtvvx" [0b65191b-efe7-467b-bef7-9baebf80f88f] Pending
helpers_test.go:344: "mysql-859648c796-xtvvx" [0b65191b-efe7-467b-bef7-9baebf80f88f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-xtvvx" [0b65191b-efe7-467b-bef7-9baebf80f88f] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 26.039620709s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- mysql -ppassword -e "show databases;": exit status 1 (244.186131ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- mysql -ppassword -e "show databases;": exit status 1 (239.785674ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.80s)
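The two non-zero exits above are expected: the pod is Running before mysqld inside it has finished initializing, so the first connection attempts fail with ERROR 2002 on the local socket and the test retries until the query succeeds. An equivalent manual retry loop (a sketch, not the test's actual code):

    until kubectl --context functional-068418 exec mysql-859648c796-xtvvx -- \
          mysql -ppassword -e "show databases;" ; do
        echo "mysqld not ready yet, retrying..."
        sleep 2
    done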

                                                
                                    
TestFunctional/parallel/FileSync (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/340515/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /etc/test/nested/copy/340515/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.30s)
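FileSync verifies that a file staged on the host shows up inside the VM at the same path. As a rough illustration (a sketch, assuming the default MINIKUBE_HOME and minikube's convention of mirroring everything under ~/.minikube/files/ into the guest when the machine is started):

    mkdir -p ~/.minikube/files/etc/test/nested/copy/340515
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/340515/hosts
    # the file is pushed into the guest on the next minikube start for this profile
    out/minikube-linux-amd64 -p functional-068418 ssh "cat /etc/test/nested/copy/340515/hosts"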

                                                
                                    
TestFunctional/parallel/CertSync (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/340515.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /etc/ssl/certs/340515.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/340515.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /usr/share/ca-certificates/340515.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3405152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /etc/ssl/certs/3405152.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/3405152.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /usr/share/ca-certificates/3405152.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
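CertSync checks both the raw .pem copies and the hash-named entries under /etc/ssl/certs. Presumably 51391683.0 and 3ec20f2e.0 are the OpenSSL subject-hash names of the two synced certificates; if so, that can be confirmed from inside the VM (a sketch, assuming openssl is available in the guest):

    out/minikube-linux-amd64 -p functional-068418 ssh "openssl x509 -noout -subject_hash -in /etc/ssl/certs/340515.pem"
    # expected to print 51391683, matching the /etc/ssl/certs/51391683.0 entry checked above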

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-068418 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
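The go-template above only enumerates the label keys on the first node. The same information is available directly with:

    kubectl --context functional-068418 get nodes --show-labels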

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active docker": exit status 1 (266.609468ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active containerd": exit status 1 (255.203688ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
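The exit status 1 results here are the expected outcome: with cri-o selected as the container runtime, systemctl is-active reports docker and containerd as inactive and exits non-zero (status 3, as the ssh error above shows), which the test treats as those runtimes being properly disabled. A manual version of the same check (a sketch; the crio unit name is an assumption and does not appear in this log):

    out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active crio"        # expected: active, exit 0
    out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active docker"      # expected: inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-068418 ssh "sudo systemctl is-active containerd"  # expected: inactive, non-zero exit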

                                                
                                    
TestFunctional/parallel/License (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.20s)

                                                
                                    
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
TestFunctional/parallel/Version/components (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 version -o=json --components: (1.008240335s)
--- PASS: TestFunctional/parallel/Version/components (1.01s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068418 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-068418
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-068418
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068418 image ls --format short --alsologtostderr:
I1128 02:54:16.212450  348960 out.go:296] Setting OutFile to fd 1 ...
I1128 02:54:16.212678  348960 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.212693  348960 out.go:309] Setting ErrFile to fd 2...
I1128 02:54:16.212700  348960 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.212972  348960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
I1128 02:54:16.213609  348960 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.213709  348960 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.214060  348960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.214115  348960 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.229523  348960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41329
I1128 02:54:16.230022  348960 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.230791  348960 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.230824  348960 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.231286  348960 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.231496  348960 main.go:141] libmachine: (functional-068418) Calling .GetState
I1128 02:54:16.233433  348960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.233478  348960 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.250303  348960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45969
I1128 02:54:16.250824  348960 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.251412  348960 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.251452  348960 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.251897  348960 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.252111  348960 main.go:141] libmachine: (functional-068418) Calling .DriverName
I1128 02:54:16.252371  348960 ssh_runner.go:195] Run: systemctl --version
I1128 02:54:16.252419  348960 main.go:141] libmachine: (functional-068418) Calling .GetSSHHostname
I1128 02:54:16.255486  348960 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.256028  348960 main.go:141] libmachine: (functional-068418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:2c:67", ip: ""} in network mk-functional-068418: {Iface:virbr1 ExpiryTime:2023-11-28 03:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:2c:67 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:functional-068418 Clientid:01:52:54:00:0c:2c:67}
I1128 02:54:16.256060  348960 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined IP address 192.168.39.18 and MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.256330  348960 main.go:141] libmachine: (functional-068418) Calling .GetSSHPort
I1128 02:54:16.256534  348960 main.go:141] libmachine: (functional-068418) Calling .GetSSHKeyPath
I1128 02:54:16.256664  348960 main.go:141] libmachine: (functional-068418) Calling .GetSSHUsername
I1128 02:54:16.256823  348960 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/functional-068418/id_rsa Username:docker}
I1128 02:54:16.448946  348960 ssh_runner.go:195] Run: sudo crictl images --output json
I1128 02:54:16.570616  348960 main.go:141] libmachine: Making call to close driver server
I1128 02:54:16.570641  348960 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:16.572970  348960 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
I1128 02:54:16.572987  348960 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:16.573007  348960 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:16.573024  348960 main.go:141] libmachine: Making call to close driver server
I1128 02:54:16.573034  348960 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:16.573304  348960 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:16.573322  348960 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068418 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| docker.io/library/nginx                 | latest             | a6bd71f48f683 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-068418  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-068418  | a78013498af8f | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| docker.io/library/mysql                 | 5.7                | bdba757bc9336 | 520MB  |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068418 image ls --format table --alsologtostderr:
I1128 02:54:16.961627  349087 out.go:296] Setting OutFile to fd 1 ...
I1128 02:54:16.961902  349087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.961913  349087 out.go:309] Setting ErrFile to fd 2...
I1128 02:54:16.961918  349087 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.962095  349087 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
I1128 02:54:16.962718  349087 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.962841  349087 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.963303  349087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.963373  349087 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.978474  349087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39265
I1128 02:54:16.978973  349087 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.979654  349087 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.979673  349087 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.980068  349087 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.980308  349087 main.go:141] libmachine: (functional-068418) Calling .GetState
I1128 02:54:16.982157  349087 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.982210  349087 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.997338  349087 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40055
I1128 02:54:16.997858  349087 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.998652  349087 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.998684  349087 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.999053  349087 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.999272  349087 main.go:141] libmachine: (functional-068418) Calling .DriverName
I1128 02:54:16.999507  349087 ssh_runner.go:195] Run: systemctl --version
I1128 02:54:16.999539  349087 main.go:141] libmachine: (functional-068418) Calling .GetSSHHostname
I1128 02:54:17.002888  349087 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:17.003413  349087 main.go:141] libmachine: (functional-068418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:2c:67", ip: ""} in network mk-functional-068418: {Iface:virbr1 ExpiryTime:2023-11-28 03:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:2c:67 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:functional-068418 Clientid:01:52:54:00:0c:2c:67}
I1128 02:54:17.003437  349087 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined IP address 192.168.39.18 and MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:17.003553  349087 main.go:141] libmachine: (functional-068418) Calling .GetSSHPort
I1128 02:54:17.003714  349087 main.go:141] libmachine: (functional-068418) Calling .GetSSHKeyPath
I1128 02:54:17.003889  349087 main.go:141] libmachine: (functional-068418) Calling .GetSSHUsername
I1128 02:54:17.004012  349087 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/functional-068418/id_rsa Username:docker}
I1128 02:54:17.173891  349087 ssh_runner.go:195] Run: sudo crictl images --output json
I1128 02:54:17.325708  349087 main.go:141] libmachine: Making call to close driver server
I1128 02:54:17.325731  349087 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:17.326058  349087 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
I1128 02:54:17.326095  349087 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:17.326107  349087 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:17.326117  349087 main.go:141] libmachine: Making call to close driver server
I1128 02:54:17.326145  349087 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:17.326498  349087 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:17.326521  349087 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:17.326647  349087 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068418 image ls --format json --alsologtostderr:
[{"id":"bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c","repoDigests":["docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3","docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519653829"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTa
gs":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866","repoDigests":["docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee","docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7"],"repoTags":["docker.io/library/nginx:latest"],"size":"190960382"},{"id":"a78013498af8f6db8281a75f211dde194f31513bff8789bb78f010d9e70db86f","repoDigests":["localhost/minikube-
local-cache-test@sha256:98c1ca5220d2642a0ebdc1d99da6ca10beb9961f7470a6b2eaa0cc9111b2ec9c"],"repoTags":["localhost/minikube-local-cache-test:functional-068418"],"size":"3345"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175
d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-068418"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762d
a6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9
d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068418 image ls --format json --alsologtostderr:
I1128 02:54:16.652428  349015 out.go:296] Setting OutFile to fd 1 ...
I1128 02:54:16.652646  349015 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.652678  349015 out.go:309] Setting ErrFile to fd 2...
I1128 02:54:16.652695  349015 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.652989  349015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
I1128 02:54:16.653743  349015 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.653908  349015 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.654332  349015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.654410  349015 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.669480  349015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36157
I1128 02:54:16.669972  349015 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.670578  349015 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.670633  349015 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.670967  349015 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.671176  349015 main.go:141] libmachine: (functional-068418) Calling .GetState
I1128 02:54:16.673173  349015 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.673227  349015 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.688237  349015 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45055
I1128 02:54:16.688691  349015 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.689275  349015 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.689311  349015 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.689653  349015 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.689841  349015 main.go:141] libmachine: (functional-068418) Calling .DriverName
I1128 02:54:16.690090  349015 ssh_runner.go:195] Run: systemctl --version
I1128 02:54:16.690121  349015 main.go:141] libmachine: (functional-068418) Calling .GetSSHHostname
I1128 02:54:16.692944  349015 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.693443  349015 main.go:141] libmachine: (functional-068418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:2c:67", ip: ""} in network mk-functional-068418: {Iface:virbr1 ExpiryTime:2023-11-28 03:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:2c:67 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:functional-068418 Clientid:01:52:54:00:0c:2c:67}
I1128 02:54:16.693479  349015 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined IP address 192.168.39.18 and MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.693588  349015 main.go:141] libmachine: (functional-068418) Calling .GetSSHPort
I1128 02:54:16.693749  349015 main.go:141] libmachine: (functional-068418) Calling .GetSSHKeyPath
I1128 02:54:16.693884  349015 main.go:141] libmachine: (functional-068418) Calling .GetSSHUsername
I1128 02:54:16.694008  349015 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/functional-068418/id_rsa Username:docker}
I1128 02:54:16.859028  349015 ssh_runner.go:195] Run: sudo crictl images --output json
I1128 02:54:17.001956  349015 main.go:141] libmachine: Making call to close driver server
I1128 02:54:17.001973  349015 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:17.002222  349015 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
I1128 02:54:17.002236  349015 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:17.002257  349015 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:17.002275  349015 main.go:141] libmachine: Making call to close driver server
I1128 02:54:17.002285  349015 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:17.002501  349015 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:17.002523  349015 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:17.002545  349015 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.43s)
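The JSON listing carries the same data as the short, table, and YAML formats (id, repoDigests, repoTags, size) but is the easiest to post-process. For example (a sketch, assuming jq on the host):

    out/minikube-linux-amd64 -p functional-068418 image ls --format json | \
        jq -r '.[] | "\(.repoTags[0])  \(.size)"'
    # prints one "tag  size-in-bytes" line per image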

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068418 image ls --format yaml --alsologtostderr:
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: bdba757bc9336a536d6884ecfaef00d24c1da3becd41e094eb226076436f258c
repoDigests:
- docker.io/library/mysql@sha256:358b0482ced8103a8691c781e1cb6cd6b5a0b463a6dc0924a7ef357513ecc7a3
- docker.io/library/mysql@sha256:f566819f2eee3a60cf5ea6c8b7d1bfc9de62e34268bf62dc34870c4fca8a85d1
repoTags:
- docker.io/library/mysql:5.7
size: "519653829"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: a6bd71f48f6839d9faae1f29d3babef831e76bc213107682c5cc80f0cbb30866
repoDigests:
- docker.io/library/nginx@sha256:10d1f5b58f74683ad34eb29287e07dab1e90f10af243f151bb50aa5dbb4d62ee
- docker.io/library/nginx@sha256:3c4c1f42a89e343c7b050c5e5d6f670a0e0b82e70e0e7d023f10092a04bbb5a7
repoTags:
- docker.io/library/nginx:latest
size: "190960382"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-068418
size: "34114467"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: a78013498af8f6db8281a75f211dde194f31513bff8789bb78f010d9e70db86f
repoDigests:
- localhost/minikube-local-cache-test@sha256:98c1ca5220d2642a0ebdc1d99da6ca10beb9961f7470a6b2eaa0cc9111b2ec9c
repoTags:
- localhost/minikube-local-cache-test:functional-068418
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068418 image ls --format yaml --alsologtostderr:
I1128 02:54:16.587972  348993 out.go:296] Setting OutFile to fd 1 ...
I1128 02:54:16.588180  348993 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.588218  348993 out.go:309] Setting ErrFile to fd 2...
I1128 02:54:16.588235  348993 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.588747  348993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
I1128 02:54:16.590227  348993 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.590403  348993 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.591085  348993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.591170  348993 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.617448  348993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36279
I1128 02:54:16.618005  348993 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.618667  348993 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.618691  348993 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.619029  348993 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.619239  348993 main.go:141] libmachine: (functional-068418) Calling .GetState
I1128 02:54:16.621488  348993 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.621554  348993 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.642235  348993 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44569
I1128 02:54:16.643001  348993 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.643570  348993 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.643596  348993 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.643902  348993 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.644149  348993 main.go:141] libmachine: (functional-068418) Calling .DriverName
I1128 02:54:16.644344  348993 ssh_runner.go:195] Run: systemctl --version
I1128 02:54:16.644371  348993 main.go:141] libmachine: (functional-068418) Calling .GetSSHHostname
I1128 02:54:16.648439  348993 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.648781  348993 main.go:141] libmachine: (functional-068418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:2c:67", ip: ""} in network mk-functional-068418: {Iface:virbr1 ExpiryTime:2023-11-28 03:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:2c:67 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:functional-068418 Clientid:01:52:54:00:0c:2c:67}
I1128 02:54:16.648808  348993 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined IP address 192.168.39.18 and MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.649161  348993 main.go:141] libmachine: (functional-068418) Calling .GetSSHPort
I1128 02:54:16.649354  348993 main.go:141] libmachine: (functional-068418) Calling .GetSSHKeyPath
I1128 02:54:16.649547  348993 main.go:141] libmachine: (functional-068418) Calling .GetSSHUsername
I1128 02:54:16.649731  348993 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/functional-068418/id_rsa Username:docker}
I1128 02:54:16.769639  348993 ssh_runner.go:195] Run: sudo crictl images --output json
I1128 02:54:16.892729  348993 main.go:141] libmachine: Making call to close driver server
I1128 02:54:16.892751  348993 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:16.893137  348993 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:16.893189  348993 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:16.893206  348993 main.go:141] libmachine: Making call to close driver server
I1128 02:54:16.893217  348993 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:16.893219  348993 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
I1128 02:54:16.893508  348993 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:16.893529  348993 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.39s)
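
For reference, the listing step above can be repeated by hand against the same profile; the profile name is the one from this run, and the yaml format is what the test captured (other --format values are assumed to behave the same way):

  # list images known to the cluster's container runtime, as the test does
  out/minikube-linux-amd64 -p functional-068418 image ls --format yaml
  # plain listing, as used by the later "image ls" verification steps
  out/minikube-linux-amd64 -p functional-068418 image ls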

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh pgrep buildkitd: exit status 1 (306.843533ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image build -t localhost/my-image:functional-068418 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image build -t localhost/my-image:functional-068418 testdata/build --alsologtostderr: (3.61101609s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-068418 image build -t localhost/my-image:functional-068418 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 3c48b7fbcd2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-068418
--> 054f9f78680
Successfully tagged localhost/my-image:functional-068418
054f9f78680b0caacd389b96455fe8ebf1932a460e80e8c765f8a78d46a27b07
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-068418 image build -t localhost/my-image:functional-068418 testdata/build --alsologtostderr:
I1128 02:54:16.880304  349069 out.go:296] Setting OutFile to fd 1 ...
I1128 02:54:16.880506  349069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.880518  349069 out.go:309] Setting ErrFile to fd 2...
I1128 02:54:16.880525  349069 out.go:343] TERM=,COLORTERM=, which probably does not support color
I1128 02:54:16.880743  349069 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
I1128 02:54:16.881447  349069 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.882046  349069 config.go:182] Loaded profile config "functional-068418": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I1128 02:54:16.882491  349069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.882558  349069 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.899729  349069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34543
I1128 02:54:16.900266  349069 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.900909  349069 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.900931  349069 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.901317  349069 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.901561  349069 main.go:141] libmachine: (functional-068418) Calling .GetState
I1128 02:54:16.903536  349069 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1128 02:54:16.903594  349069 main.go:141] libmachine: Launching plugin server for driver kvm2
I1128 02:54:16.920527  349069 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
I1128 02:54:16.921028  349069 main.go:141] libmachine: () Calling .GetVersion
I1128 02:54:16.921640  349069 main.go:141] libmachine: Using API Version  1
I1128 02:54:16.921690  349069 main.go:141] libmachine: () Calling .SetConfigRaw
I1128 02:54:16.922097  349069 main.go:141] libmachine: () Calling .GetMachineName
I1128 02:54:16.922311  349069 main.go:141] libmachine: (functional-068418) Calling .DriverName
I1128 02:54:16.922597  349069 ssh_runner.go:195] Run: systemctl --version
I1128 02:54:16.922627  349069 main.go:141] libmachine: (functional-068418) Calling .GetSSHHostname
I1128 02:54:16.925952  349069 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.926309  349069 main.go:141] libmachine: (functional-068418) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0c:2c:67", ip: ""} in network mk-functional-068418: {Iface:virbr1 ExpiryTime:2023-11-28 03:50:41 +0000 UTC Type:0 Mac:52:54:00:0c:2c:67 Iaid: IPaddr:192.168.39.18 Prefix:24 Hostname:functional-068418 Clientid:01:52:54:00:0c:2c:67}
I1128 02:54:16.926365  349069 main.go:141] libmachine: (functional-068418) DBG | domain functional-068418 has defined IP address 192.168.39.18 and MAC address 52:54:00:0c:2c:67 in network mk-functional-068418
I1128 02:54:16.926604  349069 main.go:141] libmachine: (functional-068418) Calling .GetSSHPort
I1128 02:54:16.926876  349069 main.go:141] libmachine: (functional-068418) Calling .GetSSHKeyPath
I1128 02:54:16.927049  349069 main.go:141] libmachine: (functional-068418) Calling .GetSSHUsername
I1128 02:54:16.927227  349069 sshutil.go:53] new ssh client: &{IP:192.168.39.18 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/functional-068418/id_rsa Username:docker}
I1128 02:54:17.070613  349069 build_images.go:151] Building image from path: /tmp/build.859769618.tar
I1128 02:54:17.070699  349069 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1128 02:54:17.102540  349069 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.859769618.tar
I1128 02:54:17.122103  349069 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.859769618.tar: stat -c "%s %y" /var/lib/minikube/build/build.859769618.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.859769618.tar': No such file or directory
I1128 02:54:17.122151  349069 ssh_runner.go:362] scp /tmp/build.859769618.tar --> /var/lib/minikube/build/build.859769618.tar (3072 bytes)
I1128 02:54:17.188114  349069 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.859769618
I1128 02:54:17.226639  349069 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.859769618 -xf /var/lib/minikube/build/build.859769618.tar
I1128 02:54:17.244349  349069 crio.go:297] Building image: /var/lib/minikube/build/build.859769618
I1128 02:54:17.244445  349069 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-068418 /var/lib/minikube/build/build.859769618 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1128 02:54:20.397488  349069 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-068418 /var/lib/minikube/build/build.859769618 --cgroup-manager=cgroupfs: (3.153008369s)
I1128 02:54:20.397575  349069 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.859769618
I1128 02:54:20.413049  349069 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.859769618.tar
I1128 02:54:20.425404  349069 build_images.go:207] Built localhost/my-image:functional-068418 from /tmp/build.859769618.tar
I1128 02:54:20.425449  349069 build_images.go:123] succeeded building to: functional-068418
I1128 02:54:20.425456  349069 build_images.go:124] failed building to: 
I1128 02:54:20.425523  349069 main.go:141] libmachine: Making call to close driver server
I1128 02:54:20.425549  349069 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:20.425921  349069 main.go:141] libmachine: (functional-068418) DBG | Closing plugin on server side
I1128 02:54:20.425971  349069 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:20.425982  349069 main.go:141] libmachine: Making call to close connection to plugin binary
I1128 02:54:20.426001  349069 main.go:141] libmachine: Making call to close driver server
I1128 02:54:20.426014  349069 main.go:141] libmachine: (functional-068418) Calling .Close
I1128 02:54:20.426295  349069 main.go:141] libmachine: Successfully made call to close driver server
I1128 02:54:20.426316  349069 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image ls: (1.074719281s)
2023/11/28 02:54:22 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.99s)
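
The three STEP lines in the build output imply a three-instruction Dockerfile, so the build exercised above can be reproduced with a sketch like the following (the Dockerfile contents are inferred from the STEP 1/3..3/3 lines; the directory and content.txt placeholder are illustrative, not the actual testdata/build files):

  # recreate a build context roughly equivalent to testdata/build
  mkdir -p /tmp/build-ctx && cd /tmp/build-ctx
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo placeholder > content.txt
  # build inside the cluster (crio/podman on the guest), then confirm the tag is visible
  out/minikube-linux-amd64 -p functional-068418 image build -t localhost/my-image:functional-068418 /tmp/build-ctx
  out/minikube-linux-amd64 -p functional-068418 image ls | grep my-image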

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-068418
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.92s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-068418 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-068418 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-8tvrr" [74f83907-a7b5-44c2-8ee3-7bbc717fa627] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-8tvrr" [74f83907-a7b5-44c2-8ee3-7bbc717fa627] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.032164807s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr: (5.927999818s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (6.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr: (2.325287435s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
E1128 02:53:43.673728  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.679712  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.689971  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.710252  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.750584  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.830704  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:43.991261  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-068418
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr
E1128 02:53:44.312422  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:44.953213  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:53:46.234010  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image load --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr: (6.710287993s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (7.73s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service list -o json
functional_test.go:1493: Took "638.895909ms" to run "out/minikube-linux-amd64 -p functional-068418 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.64s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.18:30965
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service hello-node --url --format={{.IP}}
E1128 02:53:48.794804  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.18:30965
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.66s)
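
The ServiceCmd subtests above reduce to a short deploy-and-expose flow; a sketch follows (image, port, and service name are taken from the logs, while the NodePort 30965 is specific to this run and will differ elsewhere):

  kubectl --context functional-068418 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
  kubectl --context functional-068418 expose deployment hello-node --type=NodePort --port=8080
  # list services, then resolve reachable URLs the way the List/HTTPS/URL subtests do
  out/minikube-linux-amd64 -p functional-068418 service list
  out/minikube-linux-amd64 -p functional-068418 service hello-node --url
  out/minikube-linux-amd64 -p functional-068418 service --namespace=default --https --url hello-node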

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image save gcr.io/google-containers/addon-resizer:functional-068418 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image save gcr.io/google-containers/addon-resizer:functional-068418 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.110205158s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.11s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "343.185679ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "73.541579ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "410.384156ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "78.798352ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.49s)
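
The profile-listing variants timed above can be run directly; judging from these timings, --light skips the per-profile status probing, which is why it returns in well under 100ms (that reading of the flag is an inference from this run, not verified beyond it):

  out/minikube-linux-amd64 profile list
  out/minikube-linux-amd64 profile list -o json
  out/minikube-linux-amd64 profile list -o json --light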

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (19.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdany-port3735248720/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1701140032275173927" to /tmp/TestFunctionalparallelMountCmdany-port3735248720/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1701140032275173927" to /tmp/TestFunctionalparallelMountCmdany-port3735248720/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1701140032275173927" to /tmp/TestFunctionalparallelMountCmdany-port3735248720/001/test-1701140032275173927
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (320.172892ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 28 02:53 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 28 02:53 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 28 02:53 test-1701140032275173927
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh cat /mount-9p/test-1701140032275173927
E1128 02:53:53.914954  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-068418 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [9f147d40-5463-4fd7-8862-32748eb7c80f] Pending
helpers_test.go:344: "busybox-mount" [9f147d40-5463-4fd7-8862-32748eb7c80f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [9f147d40-5463-4fd7-8862-32748eb7c80f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [9f147d40-5463-4fd7-8862-32748eb7c80f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 16.01457433s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-068418 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdany-port3735248720/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (19.23s)
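
A sketch of the 9p mount flow these MountCmd subtests exercise (the host path is illustrative; /mount-9p, the findmnt check, and the forced umount are the ones from the logs):

  # expose a host directory inside the guest over 9p; runs until interrupted, so background it
  out/minikube-linux-amd64 mount -p functional-068418 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  # verify the mount and inspect it from inside the VM
  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-068418 ssh -- ls -la /mount-9p
  # clean up
  out/minikube-linux-amd64 -p functional-068418 ssh "sudo umount -f /mount-9p"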

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image rm gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image rm gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr: (1.200012689s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.65s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.418804405s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image ls: (1.134824403s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (6.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-068418
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 image save --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-068418 image save --daemon gcr.io/google-containers/addon-resizer:functional-068418 --alsologtostderr: (6.249687477s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-068418
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (6.29s)
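
Taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon blocks above form a save-and-restore round trip; a sketch with an illustrative tarball path:

  # export a cached image from the cluster to a tarball on the host
  out/minikube-linux-amd64 -p functional-068418 image save gcr.io/google-containers/addon-resizer:functional-068418 /tmp/addon-resizer-save.tar
  # remove it from the cluster, then restore it from the tarball
  out/minikube-linux-amd64 -p functional-068418 image rm gcr.io/google-containers/addon-resizer:functional-068418
  out/minikube-linux-amd64 -p functional-068418 image load /tmp/addon-resizer-save.tar
  # or push the cluster's copy back into the local docker daemon and confirm it arrived
  out/minikube-linux-amd64 -p functional-068418 image save --daemon gcr.io/google-containers/addon-resizer:functional-068418
  docker image inspect gcr.io/google-containers/addon-resizer:functional-068418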

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdspecific-port432006091/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.78436ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdspecific-port432006091/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "sudo umount -f /mount-9p": exit status 1 (295.920004ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-068418 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdspecific-port432006091/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T" /mount1: exit status 1 (303.28081ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-068418 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-068418 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-068418 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2152534551/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.84s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-068418
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-068418
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-068418
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (105.81s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-648725 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1128 02:54:24.636791  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 02:55:05.597896  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-648725 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m45.81379197s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (105.81s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.06s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons enable ingress --alsologtostderr -v=5: (13.056654021s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (13.06s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.59s)
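
The legacy-ingress setup validated here is a pinned-version start followed by two addon enables; a sketch mirroring the commands in the log (profile name, memory, and driver are the ones used in this run):

  out/minikube-linux-amd64 start -p ingress-addon-legacy-648725 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --driver=kvm2 --container-runtime=crio
  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons enable ingress
  out/minikube-linux-amd64 -p ingress-addon-legacy-648725 addons enable ingress-dns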

                                                
                                    
TestJSONOutput/start/Command (98.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-453581 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1128 02:59:15.186037  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 02:59:56.146252  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-453581 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m38.476668824s)
--- PASS: TestJSONOutput/start/Command (98.48s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-453581 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-453581 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-453581 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-453581 --output=json --user=testUser: (7.112252657s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-632440 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-632440 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (83.95619ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"53912f48-b19b-4828-89bc-fd8345db69d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-632440] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"22560f10-1eec-4597-bc29-a5d9c851865a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17671"}}
	{"specversion":"1.0","id":"3dbe66df-34ab-43f8-802f-af37fd2aecf6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a326383b-8e4d-49a3-9463-e5f8a4201055","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig"}}
	{"specversion":"1.0","id":"abc018b0-dd52-4505-8d72-75f61c3654ef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube"}}
	{"specversion":"1.0","id":"0f5c22fc-8c37-4e1e-805a-b620791a4d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3818f9d6-92fb-49d4-96df-0b5f66f00b12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"81f4001a-0eb2-4155-b9e1-4b28b9618a94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-632440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-632440
--- PASS: TestErrorJSONOutput (0.23s)
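
Each line of the --output=json stream above is a standalone CloudEvents-style JSON object, so it can be filtered line by line; a sketch using jq (jq is an assumption here, not something the suite itself uses, and the intentionally invalid --driver=fail matches the test):

  # surface only error events, e.g. the DRV_UNSUPPORTED_OS message shown above
  out/minikube-linux-amd64 start -p json-output-error-632440 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'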

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.77s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-068653 --driver=kvm2  --container-runtime=crio
E1128 03:01:18.068766  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:01:23.484244  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.489509  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.499760  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.520021  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.560381  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.640853  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:23.801371  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:24.121966  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:24.762992  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:26.043471  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:28.605250  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:33.725450  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:01:43.966199  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-068653 --driver=kvm2  --container-runtime=crio: (47.586422833s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-072550 --driver=kvm2  --container-runtime=crio
E1128 03:02:04.447140  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-072550 --driver=kvm2  --container-runtime=crio: (47.440387472s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-068653
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-072550
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-072550" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-072550
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-072550: (1.032102224s)
helpers_test.go:175: Cleaning up "first-068653" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-068653
--- PASS: TestMinikubeProfile (97.77s)
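
The profile test only switches the active profile and lists profiles as JSON. A minimal sketch of the same two calls from Go, assuming a `minikube` binary on PATH and the profile names the test created; the schema of `profile list -ojson` is not asserted here, the raw output is simply printed:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    // Mirrors the profile commands in the log above.
    for _, args := range [][]string{
        {"profile", "first-068653"},
        {"profile", "list", "-ojson"},
    } {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        if err != nil {
            fmt.Println("command failed:", err)
        }
        fmt.Printf("%s", out)
    }
}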

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (27.08s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-439948 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E1128 03:02:45.407520  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-439948 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.076086608s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.08s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-439948 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-439948 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.42s)
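
The verification step pipes `mount` through `grep 9p` inside the guest to confirm the host directory is attached via the 9p filesystem. A hedged Go equivalent of that check, assuming the profile from the log still exists:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    // Same check as the test: list guest mounts over ssh and look for a 9p entry.
    // The profile name is the one from the log; substitute your own.
    out, err := exec.Command("minikube", "-p", "mount-start-1-439948",
        "ssh", "--", "mount").CombinedOutput()
    if err != nil {
        panic(err)
    }
    if strings.Contains(string(out), "9p") {
        fmt.Println("9p host mount is present")
    } else {
        fmt.Println("no 9p mount found")
    }
}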

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (26.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-463819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-463819 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.626370758s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.63s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.43s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-439948 --alsologtostderr -v=5
E1128 03:03:34.222472  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
--- PASS: TestMountStart/serial/DeleteFirst (0.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.43s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.17s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-463819
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-463819: (1.167853625s)
--- PASS: TestMountStart/serial/Stop (1.17s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (21.89s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-463819
E1128 03:03:43.674052  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-463819: (20.894182923s)
--- PASS: TestMountStart/serial/RestartStopped (21.89s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-463819 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.43s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (118.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-112998 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1128 03:04:01.909025  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:04:07.328656  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-112998 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m58.309327331s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (118.75s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.27s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-112998 -- rollout status deployment/busybox: (2.369643769s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-cbjtg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-112998 -- exec busybox-5bc68d56bd-pmx8j -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.27s)
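
The deployment check above resolves in-cluster and external names from busybox pods scheduled on both nodes. A sketch of the same loop in Go, driving `minikube kubectl` exactly as the log does; the profile name is taken from the log and the helper is illustrative, not the test's own:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// kubectlRun wraps "minikube kubectl -p <profile> --", mirroring the invocations in the log.
func kubectlRun(profile string, args ...string) (string, error) {
    full := append([]string{"kubectl", "-p", profile, "--"}, args...)
    out, err := exec.Command("minikube", full...).CombinedOutput()
    return string(out), err
}

func main() {
    const profile = "multinode-112998" // profile name taken from the log
    names, err := kubectlRun(profile, "get", "pods", "-o", "jsonpath={.items[*].metadata.name}")
    if err != nil {
        panic(err)
    }
    hosts := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    for _, pod := range strings.Fields(names) {
        for _, host := range hosts {
            out, err := kubectlRun(profile, "exec", pod, "--", "nslookup", host)
            fmt.Printf("%s -> %s (err=%v)\n%s\n", pod, host, err, out)
        }
    }
}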

                                                
                                    
x
+
TestMultiNode/serial/AddNode (48.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-112998 -v 3 --alsologtostderr
E1128 03:06:23.484516  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:06:51.169807  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-112998 -v 3 --alsologtostderr: (48.162232396s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.77s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (7.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp testdata/cp-test.txt multinode-112998:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2985400018/001/cp-test_multinode-112998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998:/home/docker/cp-test.txt multinode-112998-m02:/home/docker/cp-test_multinode-112998_multinode-112998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test_multinode-112998_multinode-112998-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998:/home/docker/cp-test.txt multinode-112998-m03:/home/docker/cp-test_multinode-112998_multinode-112998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test_multinode-112998_multinode-112998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp testdata/cp-test.txt multinode-112998-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2985400018/001/cp-test_multinode-112998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt multinode-112998:/home/docker/cp-test_multinode-112998-m02_multinode-112998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test_multinode-112998-m02_multinode-112998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m02:/home/docker/cp-test.txt multinode-112998-m03:/home/docker/cp-test_multinode-112998-m02_multinode-112998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test_multinode-112998-m02_multinode-112998-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp testdata/cp-test.txt multinode-112998-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2985400018/001/cp-test_multinode-112998-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt multinode-112998:/home/docker/cp-test_multinode-112998-m03_multinode-112998.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998 "sudo cat /home/docker/cp-test_multinode-112998-m03_multinode-112998.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 cp multinode-112998-m03:/home/docker/cp-test.txt multinode-112998-m02:/home/docker/cp-test_multinode-112998-m03_multinode-112998-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 ssh -n multinode-112998-m02 "sudo cat /home/docker/cp-test_multinode-112998-m03_multinode-112998-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.92s)
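
Every copy in the log is followed by an `ssh -n <node> sudo cat` to confirm the file landed where expected. A minimal Go sketch of one such copy-and-verify round trip, assuming the profile from the log and a local testdata/cp-test.txt:

package main

import (
    "fmt"
    "os/exec"
)

func main() {
    const profile = "multinode-112998" // from the log; substitute your own profile
    // Copy a local file into the primary node, then read it back over ssh,
    // the same cp/ssh pairing the test repeats for every node combination.
    steps := [][]string{
        {"-p", profile, "cp", "testdata/cp-test.txt", profile + ":/home/docker/cp-test.txt"},
        {"-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt"},
    }
    for _, args := range steps {
        out, err := exec.Command("minikube", args...).CombinedOutput()
        fmt.Printf("minikube %v (err=%v)\n%s\n", args, err, out)
    }
}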

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-112998 node stop m03: (2.097809943s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-112998 status: exit status 7 (455.402991ms)

                                                
                                                
-- stdout --
	multinode-112998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-112998-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-112998-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr: exit status 7 (453.098522ms)

                                                
                                                
-- stdout --
	multinode-112998
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-112998-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-112998-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:07:05.336744  356016 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:07:05.337026  356016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:07:05.337037  356016 out.go:309] Setting ErrFile to fd 2...
	I1128 03:07:05.337042  356016 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:07:05.337245  356016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:07:05.337429  356016 out.go:303] Setting JSON to false
	I1128 03:07:05.337459  356016 mustload.go:65] Loading cluster: multinode-112998
	I1128 03:07:05.337595  356016 notify.go:220] Checking for updates...
	I1128 03:07:05.337816  356016 config.go:182] Loaded profile config "multinode-112998": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:07:05.337830  356016 status.go:255] checking status of multinode-112998 ...
	I1128 03:07:05.338276  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.338351  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.358756  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40289
	I1128 03:07:05.359251  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.359778  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.359831  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.360323  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.360548  356016 main.go:141] libmachine: (multinode-112998) Calling .GetState
	I1128 03:07:05.362335  356016 status.go:330] multinode-112998 host status = "Running" (err=<nil>)
	I1128 03:07:05.362352  356016 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:07:05.362616  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.362651  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.377001  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39577
	I1128 03:07:05.377428  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.377912  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.377936  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.378259  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.378530  356016 main.go:141] libmachine: (multinode-112998) Calling .GetIP
	I1128 03:07:05.381589  356016 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:07:05.382090  356016 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:07:05.382122  356016 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:07:05.382206  356016 host.go:66] Checking if "multinode-112998" exists ...
	I1128 03:07:05.382503  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.382545  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.396733  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37223
	I1128 03:07:05.397182  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.397621  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.397646  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.397996  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.398201  356016 main.go:141] libmachine: (multinode-112998) Calling .DriverName
	I1128 03:07:05.398397  356016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 03:07:05.398446  356016 main.go:141] libmachine: (multinode-112998) Calling .GetSSHHostname
	I1128 03:07:05.401278  356016 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:07:05.401724  356016 main.go:141] libmachine: (multinode-112998) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:78:69:e6", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:04:15 +0000 UTC Type:0 Mac:52:54:00:78:69:e6 Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:multinode-112998 Clientid:01:52:54:00:78:69:e6}
	I1128 03:07:05.401758  356016 main.go:141] libmachine: (multinode-112998) DBG | domain multinode-112998 has defined IP address 192.168.39.73 and MAC address 52:54:00:78:69:e6 in network mk-multinode-112998
	I1128 03:07:05.401904  356016 main.go:141] libmachine: (multinode-112998) Calling .GetSSHPort
	I1128 03:07:05.402068  356016 main.go:141] libmachine: (multinode-112998) Calling .GetSSHKeyPath
	I1128 03:07:05.402182  356016 main.go:141] libmachine: (multinode-112998) Calling .GetSSHUsername
	I1128 03:07:05.402369  356016 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998/id_rsa Username:docker}
	I1128 03:07:05.496753  356016 ssh_runner.go:195] Run: systemctl --version
	I1128 03:07:05.502665  356016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:07:05.516276  356016 kubeconfig.go:92] found "multinode-112998" server: "https://192.168.39.73:8443"
	I1128 03:07:05.516304  356016 api_server.go:166] Checking apiserver status ...
	I1128 03:07:05.516337  356016 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1128 03:07:05.527783  356016 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1079/cgroup
	I1128 03:07:05.535849  356016 api_server.go:182] apiserver freezer: "11:freezer:/kubepods/burstable/podf38601fa395350043ca26b7c11be4397/crio-e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0"
	I1128 03:07:05.535915  356016 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf38601fa395350043ca26b7c11be4397/crio-e770ed13f86210c2d4b5b91717591f2c9f166049855e5167e949d596ea038ac0/freezer.state
	I1128 03:07:05.544752  356016 api_server.go:204] freezer state: "THAWED"
	I1128 03:07:05.544784  356016 api_server.go:253] Checking apiserver healthz at https://192.168.39.73:8443/healthz ...
	I1128 03:07:05.549795  356016 api_server.go:279] https://192.168.39.73:8443/healthz returned 200:
	ok
	I1128 03:07:05.549815  356016 status.go:421] multinode-112998 apiserver status = Running (err=<nil>)
	I1128 03:07:05.549824  356016 status.go:257] multinode-112998 status: &{Name:multinode-112998 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1128 03:07:05.549839  356016 status.go:255] checking status of multinode-112998-m02 ...
	I1128 03:07:05.550124  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.550166  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.565203  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43885
	I1128 03:07:05.565647  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.566058  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.566083  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.566414  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.566618  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetState
	I1128 03:07:05.568238  356016 status.go:330] multinode-112998-m02 host status = "Running" (err=<nil>)
	I1128 03:07:05.568254  356016 host.go:66] Checking if "multinode-112998-m02" exists ...
	I1128 03:07:05.568577  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.568621  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.582922  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33613
	I1128 03:07:05.583313  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.583759  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.583781  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.584088  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.584265  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetIP
	I1128 03:07:05.587139  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:07:05.587553  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:07:05.587591  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:07:05.587738  356016 host.go:66] Checking if "multinode-112998-m02" exists ...
	I1128 03:07:05.588109  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.588145  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.603256  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43781
	I1128 03:07:05.603653  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.604106  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.604129  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.604530  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.604790  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .DriverName
	I1128 03:07:05.605019  356016 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1128 03:07:05.605051  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHHostname
	I1128 03:07:05.607444  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:07:05.607842  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:32:00", ip: ""} in network mk-multinode-112998: {Iface:virbr1 ExpiryTime:2023-11-28 04:05:22 +0000 UTC Type:0 Mac:52:54:00:f0:32:00 Iaid: IPaddr:192.168.39.31 Prefix:24 Hostname:multinode-112998-m02 Clientid:01:52:54:00:f0:32:00}
	I1128 03:07:05.607873  356016 main.go:141] libmachine: (multinode-112998-m02) DBG | domain multinode-112998-m02 has defined IP address 192.168.39.31 and MAC address 52:54:00:f0:32:00 in network mk-multinode-112998
	I1128 03:07:05.608015  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHPort
	I1128 03:07:05.608222  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHKeyPath
	I1128 03:07:05.608406  356016 main.go:141] libmachine: (multinode-112998-m02) Calling .GetSSHUsername
	I1128 03:07:05.608569  356016 sshutil.go:53] new ssh client: &{IP:192.168.39.31 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17671-333305/.minikube/machines/multinode-112998-m02/id_rsa Username:docker}
	I1128 03:07:05.696270  356016 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1128 03:07:05.708522  356016 status.go:257] multinode-112998-m02 status: &{Name:multinode-112998-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1128 03:07:05.708564  356016 status.go:255] checking status of multinode-112998-m03 ...
	I1128 03:07:05.708942  356016 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1128 03:07:05.709010  356016 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1128 03:07:05.724264  356016 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43483
	I1128 03:07:05.724695  356016 main.go:141] libmachine: () Calling .GetVersion
	I1128 03:07:05.725311  356016 main.go:141] libmachine: Using API Version  1
	I1128 03:07:05.725339  356016 main.go:141] libmachine: () Calling .SetConfigRaw
	I1128 03:07:05.725650  356016 main.go:141] libmachine: () Calling .GetMachineName
	I1128 03:07:05.725809  356016 main.go:141] libmachine: (multinode-112998-m03) Calling .GetState
	I1128 03:07:05.727602  356016 status.go:330] multinode-112998-m03 host status = "Stopped" (err=<nil>)
	I1128 03:07:05.727620  356016 status.go:343] host is not running, skipping remaining checks
	I1128 03:07:05.727627  356016 status.go:257] multinode-112998-m03 status: &{Name:multinode-112998-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.01s)
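
Note that `minikube status` deliberately exits non-zero (exit status 7 in the run above) once any node is stopped, so callers need to distinguish "command failed" from "cluster partially down". A small Go sketch of how a wrapper might capture that exit code instead of treating it as a hard failure (illustrative only):

package main

import (
    "errors"
    "fmt"
    "os/exec"
)

func main() {
    // Profile name taken from the log above.
    cmd := exec.Command("minikube", "-p", "multinode-112998", "status")
    out, err := cmd.CombinedOutput()
    fmt.Printf("%s", out)
    var exitErr *exec.ExitError
    if errors.As(err, &exitErr) {
        fmt.Println("status exit code:", exitErr.ExitCode())
    } else if err != nil {
        panic(err)
    }
}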

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 node start m03 --alsologtostderr
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-112998 node start m03 --alsologtostderr: (29.105456441s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.77s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 node delete m03
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.53s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (439.35s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-112998 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1128 03:23:34.222775  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:23:43.673953  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:26:23.483885  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:26:46.722778  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:28:34.222775  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:28:43.673982  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-112998 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m18.788124494s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-112998 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (439.35s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (53.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-112998
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-112998-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-112998-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (81.37116ms)

                                                
                                                
-- stdout --
	* [multinode-112998-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-112998-m02' is duplicated with machine name 'multinode-112998-m02' in profile 'multinode-112998'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-112998-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-112998-m03 --driver=kvm2  --container-runtime=crio: (52.27068271s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-112998
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-112998: exit status 80 (237.847167ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-112998
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-112998-m03 already exists in multinode-112998-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-112998-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (53.43s)

                                                
                                    
x
+
TestScheduledStopUnix (120.52s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-850740 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-850740 --memory=2048 --driver=kvm2  --container-runtime=crio: (48.682871443s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850740 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-850740 -n scheduled-stop-850740
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850740 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850740 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850740 -n scheduled-stop-850740
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-850740
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-850740 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1128 03:36:23.483897  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-850740
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-850740: exit status 7 (82.936091ms)

                                                
                                                
-- stdout --
	scheduled-stop-850740
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850740 -n scheduled-stop-850740
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-850740 -n scheduled-stop-850740: exit status 7 (75.975636ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-850740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-850740
--- PASS: TestScheduledStopUnix (120.52s)
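
The scheduled-stop flow exercised here is: queue a stop for later, cancel it, queue a shorter one, then poll `status` until the host reports Stopped. A hedged Go sketch of that sequence using only flags that appear in the log above:

package main

import (
    "fmt"
    "os/exec"
    "strings"
    "time"
)

// mk runs a minikube subcommand and returns its combined output, ignoring errors
// because several of these calls are expected to return non-zero once the VM stops.
func mk(args ...string) string {
    out, _ := exec.Command("minikube", args...).CombinedOutput()
    return string(out)
}

func main() {
    const p = "scheduled-stop-850740" // profile name from the log
    fmt.Print(mk("stop", "-p", p, "--schedule", "5m"))
    fmt.Print(mk("stop", "-p", p, "--cancel-scheduled"))
    fmt.Print(mk("stop", "-p", p, "--schedule", "15s"))
    for i := 0; i < 20; i++ {
        host := strings.TrimSpace(mk("status", "--format={{.Host}}", "-p", p))
        fmt.Println("host:", host)
        if host == "Stopped" {
            break
        }
        time.Sleep(10 * time.Second)
    }
}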

                                                
                                    
x
+
TestKubernetesUpgrade (164.26s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.879436676s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-779675
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-779675: (2.210476783s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-779675 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-779675 status --format={{.Host}}: exit status 7 (99.648461ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (43.662590739s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-779675 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (118.489977ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-779675] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.0 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-779675
	    minikube start -p kubernetes-upgrade-779675 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7796752 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.0, by running:
	    
	    minikube start -p kubernetes-upgrade-779675 --kubernetes-version=v1.29.0-rc.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-779675 --memory=2200 --kubernetes-version=v1.29.0-rc.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (23.08271151s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-779675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-779675
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-779675: (1.137912428s)
--- PASS: TestKubernetesUpgrade (164.26s)
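
The upgrade test documents the supported path: start at the old Kubernetes version, stop, restart at the newer version, and expect any in-place downgrade to be refused with K8S_DOWNGRADE_UNSUPPORTED (exit status 106 above). A sketch of the same sequence; the profile name below is hypothetical and the flags are copied from the log:

package main

import (
    "fmt"
    "os/exec"
)

// run executes one minikube invocation and echoes its output.
func run(args ...string) error {
    out, err := exec.Command("minikube", args...).CombinedOutput()
    fmt.Printf("minikube %v\n%s\n", args, out)
    return err
}

func main() {
    const p = "kubernetes-upgrade-demo" // hypothetical profile name, not the one from the log
    // Upgrade path mirrored from the test: start old, stop, start new in place.
    _ = run("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=kvm2", "--container-runtime=crio")
    _ = run("stop", "-p", p)
    _ = run("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.29.0-rc.0", "--driver=kvm2", "--container-runtime=crio")
    // An in-place downgrade is refused, so an error is expected here.
    if err := run("start", "-p", p, "--memory=2200", "--kubernetes-version=v1.16.0", "--driver=kvm2", "--container-runtime=crio"); err != nil {
        fmt.Println("downgrade rejected as expected:", err)
    }
    _ = run("delete", "-p", p)
}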

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (104.515311ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-477815] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (77.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477815 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477815 --driver=kvm2  --container-runtime=crio: (1m17.025002449s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-477815 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (77.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.89s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-546871 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-546871 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (193.609428ms)

                                                
                                                
-- stdout --
	* [false-546871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17671
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1128 03:36:42.664420  364434 out.go:296] Setting OutFile to fd 1 ...
	I1128 03:36:42.664599  364434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:36:42.664617  364434 out.go:309] Setting ErrFile to fd 2...
	I1128 03:36:42.664626  364434 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1128 03:36:42.664869  364434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17671-333305/.minikube/bin
	I1128 03:36:42.665584  364434 out.go:303] Setting JSON to false
	I1128 03:36:42.666812  364434 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8353,"bootTime":1701134250,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1046-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1128 03:36:42.666883  364434 start.go:138] virtualization: kvm guest
	I1128 03:36:42.669459  364434 out.go:177] * [false-546871] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I1128 03:36:42.671476  364434 out.go:177]   - MINIKUBE_LOCATION=17671
	I1128 03:36:42.673075  364434 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1128 03:36:42.671501  364434 notify.go:220] Checking for updates...
	I1128 03:36:42.675816  364434 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17671-333305/kubeconfig
	I1128 03:36:42.677424  364434 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17671-333305/.minikube
	I1128 03:36:42.678842  364434 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1128 03:36:42.680727  364434 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1128 03:36:42.683074  364434 config.go:182] Loaded profile config "NoKubernetes-477815": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:36:42.683236  364434 config.go:182] Loaded profile config "offline-crio-428381": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I1128 03:36:42.683309  364434 config.go:182] Loaded profile config "running-upgrade-498123": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I1128 03:36:42.683426  364434 driver.go:378] Setting default libvirt URI to qemu:///system
	I1128 03:36:42.778579  364434 out.go:177] * Using the kvm2 driver based on user configuration
	I1128 03:36:42.779788  364434 start.go:298] selected driver: kvm2
	I1128 03:36:42.779803  364434 start.go:902] validating driver "kvm2" against <nil>
	I1128 03:36:42.779817  364434 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1128 03:36:42.782122  364434 out.go:177] 
	W1128 03:36:42.783561  364434 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1128 03:36:42.785023  364434 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-546871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-546871" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-546871

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-546871"

                                                
                                                
----------------------- debugLogs end: false-546871 [took: 3.538868803s] --------------------------------
helpers_test.go:175: Cleaning up "false-546871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-546871
--- PASS: TestNetworkPlugins/group/false (3.89s)
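Note: this group is expected to fail fast; the --cni=false start is rejected by usage validation (MK_USAGE, exit status 14) because the crio runtime requires a CNI, so no VM is ever created and the debugLogs above only show missing-profile errors. A hedged sketch of an invocation that passes the same validation, using a CNI value exercised elsewhere in this run:

# Illustrative only; any of the --cni values used by the other network-plugin groups would satisfy the check.
out/minikube-linux-amd64 start -p false-546871 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio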

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (11.79s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --driver=kvm2  --container-runtime=crio: (9.900392545s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-477815 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-477815 status -o json: exit status 2 (275.907775ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-477815","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-477815
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-477815: (1.611846388s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.79s)
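Note: the exit status 2 from status -o json is expected while Kubernetes is stopped; the JSON is still written to stdout, so it can be post-processed. A minimal sketch, assuming jq is available on the host:

# Illustrative only; field names match the JSON captured above.
out/minikube-linux-amd64 -p NoKubernetes-477815 status -o json | jq -r '.Host, .Kubelet, .APIServer'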

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477815 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.64466594s)
--- PASS: TestNoKubernetes/serial/Start (27.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-477815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-477815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (225.551382ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.23s)
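Note: the stderr above shows the remote command exited with status 3, which systemctl conventionally returns from is-active when the unit is not active; minikube ssh surfaces that as exit status 1, which is exactly what this subtest asserts. A sketch for inspecting the raw code by hand:

# Illustrative only; single quotes keep $? from being expanded by the local shell.
out/minikube-linux-amd64 ssh -p NoKubernetes-477815 'sudo systemctl is-active kubelet; echo exit=$?'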

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.44s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-477815
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-477815: (1.180433871s)
--- PASS: TestNoKubernetes/serial/Stop (1.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (72.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-477815 --driver=kvm2  --container-runtime=crio
E1128 03:38:43.673911  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-477815 --driver=kvm2  --container-runtime=crio: (1m12.011983358s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (72.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-477815 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-477815 "sudo systemctl is-active --quiet service kubelet": exit status 1 (233.048548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                    
x
+
TestPause/serial/Start (108.53s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-832446 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
E1128 03:41:23.483911  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-832446 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m48.534500488s)
--- PASS: TestPause/serial/Start (108.53s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (29.96s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-832446 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-832446 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (29.929605965s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (103.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m43.64117447s)
--- PASS: TestNetworkPlugins/group/auto/Start (103.64s)

                                                
                                    
x
+
TestPause/serial/Pause (0.78s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-832446 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.78s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.29s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-832446 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-832446 --output=json --layout=cluster: exit status 2 (291.977532ms)

                                                
                                                
-- stdout --
	{"Name":"pause-832446","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-832446","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
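Note: exit status 2 is expected here; status --layout=cluster reports the paused cluster through its exit code while still printing the JSON above, which uses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A sketch for listing the per-component states, assuming jq:

# Illustrative only; walks the Nodes/Components structure shown in the captured JSON.
out/minikube-linux-amd64 status -p pause-832446 --output=json --layout=cluster | jq -r '.Nodes[].Components | to_entries[] | "\(.key)=\(.value.StatusName)"'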

                                                
                                    
x
+
TestPause/serial/Unpause (0.8s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-832446 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.80s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1.06s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-832446 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-832446 --alsologtostderr -v=5: (1.064056589s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-832446 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-832446 --alsologtostderr -v=5: (1.079406232s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.4s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.40s)
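Note: taken together, this group walks the whole pause lifecycle. A condensed sketch of the same sequence, built only from commands that appear in the subtests above:

# Illustrative recap of the pause/unpause/delete lifecycle exercised by this group.
out/minikube-linux-amd64 pause -p pause-832446 --alsologtostderr -v=5
out/minikube-linux-amd64 status -p pause-832446 --output=json --layout=cluster   # exits 2 while paused
out/minikube-linux-amd64 unpause -p pause-832446 --alsologtostderr -v=5
out/minikube-linux-amd64 delete -p pause-832446 --alsologtostderr -v=5
out/minikube-linux-amd64 profile list --output json                              # profile should no longer be listed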

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (75.7s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m15.703670709s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (107.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E1128 03:43:26.723188  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m47.845765127s)
--- PASS: TestNetworkPlugins/group/calico/Start (107.85s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-frm9t" [f261b767-91eb-413e-ab07-40d132e0064a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-frm9t" [f261b767-91eb-413e-ab07-40d132e0064a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.016403505s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.47s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-q28t4" [c307a79f-5e18-47dc-8d2d-9b8f9ee97732] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.028748047s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)
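Note: the helper polls for pods matching the app=kindnet label in kube-system until they report Running. A roughly equivalent manual check, as a sketch; kubectl wait is my substitution here, not what the helper actually calls:

# Illustrative only; the test helper polls pod status rather than using kubectl wait.
kubectl --context kindnet-546871 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=600s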

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (11.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rwn6b" [29fc7d0c-6fdc-46bc-8170-7b1a27b3a02a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rwn6b" [29fc7d0c-6fdc-46bc-8170-7b1a27b3a02a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.019585285s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
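Note: the DNS, Localhost and HairPin subtests above all execute inside the netcat deployment: nslookup against kubernetes.default checks in-cluster DNS, nc to localhost:8080 checks that the pod can reach its own port, and nc to the netcat name checks hairpin traffic back through the service. A condensed sketch using the same commands:

# Illustrative recap of the three connectivity checks run against the netcat deployment.
kubectl --context auto-546871 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"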

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-268578
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (91.52s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m31.515330706s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (91.52s)
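Note: --cni accepts a path to a CNI manifest (here testdata/kube-flannel.yaml) as well as the built-in names used elsewhere in this run (kindnet, calico, flannel, bridge, false). A hedged sketch with a placeholder path:

# Illustrative only; /path/to/custom-cni.yaml is a placeholder, not a file from this run.
out/minikube-linux-amd64 start -p custom-flannel-546871 --memory=3072 --cni=/path/to/custom-cni.yaml --driver=kvm2 --container-runtime=crio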

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (128.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m8.645689707s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (128.65s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (130.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m10.74727605s)
--- PASS: TestNetworkPlugins/group/flannel/Start (130.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-pknxn" [5625ca0a-b7a3-4beb-9f7c-1a3f75d278ca] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.031715996s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (15.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-rkjjx" [8e92c2ed-34d8-4b10-8760-dbbea49c4844] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-rkjjx" [8e92c2ed-34d8-4b10-8760-dbbea49c4844] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.016585501s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (122.55s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-546871 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m2.550895626s)
--- PASS: TestNetworkPlugins/group/bridge/Start (122.55s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4qm24" [e1e011a0-d851-444c-8635-0791c6013e54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1128 03:46:23.483786  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-4qm24" [e1e011a0-d851-444c-8635-0791c6013e54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.013714618s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (136.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-666657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-666657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m16.187219205s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (136.19s)
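Note: this profile pins the control plane to v1.16.0 via --kubernetes-version. A sketch, assuming jq, for confirming the pinned version once the start above has finished:

# Illustrative only; reads the server version reported by the pinned cluster.
kubectl --context old-k8s-version-666657 version -o json | jq -r '.serverVersion.gitVersion'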

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-6wkhl" [34c78b7d-93cd-47f3-b034-20622705df32] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-6wkhl" [34c78b7d-93cd-47f3-b034-20622705df32] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.011769478s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-7c6kx" [06d8d03a-a0fe-4214-af7a-7e3ef1154919] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023722096s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (13.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-xtdrx" [5e629114-189f-4d59-b84c-91f57f0875b7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-xtdrx" [5e629114-189f-4d59-b84c-91f57f0875b7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.012432979s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.45s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (126.7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (2m6.70454536s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (126.70s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.97s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-725962 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-725962 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m56.974154397s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (116.97s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-546871 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-546871 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-nc8zj" [086438a7-0455-40dc-b926-d14e0c45dac1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-nc8zj" [086438a7-0455-40dc-b926-d14e0c45dac1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.012391766s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-546871 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-546871 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E1128 04:16:58.568865  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 04:17:05.902958  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (65.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-644411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 03:48:34.223184  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:48:43.674108  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-644411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (1m5.084707684s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-666657 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6761a74a-c363-4ea2-9fe3-5615e11e89ff] Pending
helpers_test.go:344: "busybox" [6761a74a-c363-4ea2-9fe3-5615e11e89ff] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6761a74a-c363-4ea2-9fe3-5615e11e89ff] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.034762802s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-666657 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.48s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-666657 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-666657 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-644411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-644411 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.532931976s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (8.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-222348 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [11f46e0c-9fa1-44ae-99e2-5c62c179e72f] Pending
helpers_test.go:344: "busybox" [11f46e0c-9fa1-44ae-99e2-5c62c179e72f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1128 03:49:39.289654  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:49:39.506782  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
helpers_test.go:344: "busybox" [11f46e0c-9fa1-44ae-99e2-5c62c179e72f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.022911462s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-222348 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.98s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [74311fc7-06a5-4161-8803-f0ff8bf14071] Pending
helpers_test.go:344: "busybox" [74311fc7-06a5-4161-8803-f0ff8bf14071] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [74311fc7-06a5-4161-8803-f0ff8bf14071] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.031514197s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.42s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-222348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-222348 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.023529907s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-222348 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-725962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-725962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080999308s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-725962 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (792.6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-666657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-666657 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (13m12.299821095s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-666657 -n old-k8s-version-666657
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (792.60s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (311.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-644411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 03:52:03.689156  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:52:05.903136  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:05.908416  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:05.918668  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:05.938952  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:05.979267  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:06.059704  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-644411 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (5m10.794381598s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-644411 -n newest-cni-644411
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (311.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (629.02s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-222348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0
E1128 03:52:19.050570  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-222348 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.0: (10m28.729796084s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-222348 -n no-preload-222348
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (629.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-725962 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 03:52:26.385858  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:39.531266  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:52:39.762640  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:52:46.866052  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:52:54.102537  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:52:55.195958  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.201240  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.211503  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.231766  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.272051  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.352359  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.512806  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:55.833579  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:56.474726  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:52:57.755619  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:53:00.316301  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:53:05.436687  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:53:15.677384  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:53:20.492068  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:53:27.826294  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:53:34.222721  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:53:36.157662  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:53:43.674087  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:54:01.683877  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:54:17.118309  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:54:18.806670  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:54:19.025188  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:54:42.412922  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:54:46.492695  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:54:46.710220  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
E1128 03:54:49.746639  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:55:10.257849  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:55:37.943084  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/calico-546871/client.crt: no such file or directory
E1128 03:55:39.038937  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:56:17.839836  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:56:23.483750  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/ingress-addon-legacy-648725/client.crt: no such file or directory
E1128 03:56:45.524875  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/custom-flannel-546871/client.crt: no such file or directory
E1128 03:56:58.569264  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:57:05.902919  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-725962 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m59.989196823s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-725962 -n default-k8s-diff-port-725962
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (600.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-644411 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-644411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-644411 -n newest-cni-644411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-644411 -n newest-cni-644411: exit status 2 (276.870376ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-644411 -n newest-cni-644411
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-644411 -n newest-cni-644411: exit status 2 (288.635106ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-644411 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-644411 -n newest-cni-644411
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-644411 -n newest-cni-644411
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.92s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (138.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-672176 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E1128 03:57:26.253977  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/enable-default-cni-546871/client.crt: no such file or directory
E1128 03:57:33.587592  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/flannel-546871/client.crt: no such file or directory
E1128 03:57:55.195787  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:58:22.879278  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/bridge-546871/client.crt: no such file or directory
E1128 03:58:34.222451  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/functional-068418/client.crt: no such file or directory
E1128 03:58:43.673764  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/addons-681229/client.crt: no such file or directory
E1128 03:59:18.807164  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/kindnet-546871/client.crt: no such file or directory
E1128 03:59:19.025320  340515 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17671-333305/.minikube/profiles/auto-546871/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-672176 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m18.092509262s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (138.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-672176 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a46af3fe-d81c-48bd-bec0-22dbb386d2c2] Pending
helpers_test.go:344: "busybox" [a46af3fe-d81c-48bd-bec0-22dbb386d2c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a46af3fe-d81c-48bd-bec0-22dbb386d2c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.033447672s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-672176 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.47s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-672176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-672176 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.177712404s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-672176 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (629.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-672176 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-672176 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m29.560945484s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-672176 -n embed-certs-672176
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (629.85s)

Test skip (39/304)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.4/cached-images 0
13 TestDownloadOnly/v1.28.4/binaries 0
14 TestDownloadOnly/v1.28.4/kubectl 0
19 TestDownloadOnly/v1.29.0-rc.0/cached-images 0
20 TestDownloadOnly/v1.29.0-rc.0/binaries 0
21 TestDownloadOnly/v1.29.0-rc.0/kubectl 0
25 TestDownloadOnlyKic 0
39 TestAddons/parallel/Olm 0
51 TestDockerFlags 0
54 TestDockerEnvContainerd 0
56 TestHyperKitDriverInstallOrUpdate 0
57 TestHyperkitDriverSkipUpgrade 0
108 TestFunctional/parallel/DockerEnv 0
109 TestFunctional/parallel/PodmanEnv 0
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
131 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
133 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
134 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
157 TestGvisorAddon 0
158 TestImageBuild 0
191 TestKicCustomNetwork 0
192 TestKicExistingNetwork 0
193 TestKicCustomSubnet 0
194 TestKicStaticIP 0
225 TestChangeNoneUser 0
228 TestScheduledStopWindows 0
230 TestSkaffold 0
232 TestInsufficientStorage 0
236 TestMissingContainerUpgrade 0
241 TestNetworkPlugins/group/kubenet 4.05
250 TestNetworkPlugins/group/cilium 4.17
265 TestStartStop/group/disable-driver-mounts 0.21
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/binaries
aaa_download_only_test.go:139: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.0/kubectl
aaa_download_only_test.go:155: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:213: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:497: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/kubenet (4.05s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-546871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-546871

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-546871

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/hosts:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/resolv.conf:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-546871

>>> host: crictl pods:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: crictl containers:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> k8s: describe netcat deployment:
error: context "kubenet-546871" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-546871" does not exist

>>> k8s: netcat logs:
error: context "kubenet-546871" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-546871" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-546871" does not exist

>>> k8s: coredns logs:
error: context "kubenet-546871" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-546871" does not exist

>>> k8s: api server logs:
error: context "kubenet-546871" does not exist

>>> host: /etc/cni:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: ip a s:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: ip r s:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: iptables-save:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: iptables table nat:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-546871" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-546871" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-546871" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: kubelet daemon config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> k8s: kubelet logs:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-546871

>>> host: docker daemon status:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: docker daemon config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: docker system info:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: cri-docker daemon status:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: cri-docker daemon config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: cri-dockerd version:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: containerd daemon status:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: containerd daemon config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: containerd config dump:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: crio daemon status:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: crio daemon config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: /etc/crio:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

>>> host: crio config:
* Profile "kubenet-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-546871"

----------------------- debugLogs end: kubenet-546871 [took: 3.851796937s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-546871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-546871
--- SKIP: TestNetworkPlugins/group/kubenet (4.05s)

TestNetworkPlugins/group/cilium (4.17s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-546871 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-546871

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-546871

>>> host: /etc/nsswitch.conf:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/hosts:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/resolv.conf:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-546871

>>> host: crictl pods:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: crictl containers:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> k8s: describe netcat deployment:
error: context "cilium-546871" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-546871" does not exist

>>> k8s: netcat logs:
error: context "cilium-546871" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-546871" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-546871" does not exist

>>> k8s: coredns logs:
error: context "cilium-546871" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-546871" does not exist

>>> k8s: api server logs:
error: context "cilium-546871" does not exist

>>> host: /etc/cni:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: ip a s:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: ip r s:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: iptables-save:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: iptables table nat:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-546871

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-546871

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-546871" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-546871" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-546871

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-546871

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-546871" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-546871" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-546871" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-546871" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-546871" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: kubelet daemon config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> k8s: kubelet logs:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-546871

>>> host: docker daemon status:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: docker daemon config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: docker system info:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: cri-docker daemon status:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: cri-docker daemon config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: cri-dockerd version:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: containerd daemon status:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: containerd daemon config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: containerd config dump:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: crio daemon status:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: crio daemon config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: /etc/crio:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

>>> host: crio config:
* Profile "cilium-546871" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-546871"

----------------------- debugLogs end: cilium-546871 [took: 3.992744627s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-546871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-546871
--- SKIP: TestNetworkPlugins/group/cilium (4.17s)

TestStartStop/group/disable-driver-mounts (0.21s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-846967" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-846967
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)